Dec 16 12:25:32.332383 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 16 12:25:32.332409 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025 Dec 16 12:25:32.332418 kernel: KASLR enabled Dec 16 12:25:32.332424 kernel: efi: EFI v2.7 by EDK II Dec 16 12:25:32.332429 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Dec 16 12:25:32.332435 kernel: random: crng init done Dec 16 12:25:32.332442 kernel: secureboot: Secure boot disabled Dec 16 12:25:32.332448 kernel: ACPI: Early table checksum verification disabled Dec 16 12:25:32.332455 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Dec 16 12:25:32.332462 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 16 12:25:32.332468 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:25:32.332474 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:25:32.332480 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:25:32.332486 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:25:32.332496 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:25:32.332502 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:25:32.332509 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:25:32.332515 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:25:32.332522 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 12:25:32.332528 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 16 12:25:32.332535 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 16 12:25:32.332541 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 16 12:25:32.332550 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Dec 16 12:25:32.332556 kernel: Zone ranges: Dec 16 12:25:32.332563 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 16 12:25:32.332569 kernel: DMA32 empty Dec 16 12:25:32.332575 kernel: Normal empty Dec 16 12:25:32.332581 kernel: Device empty Dec 16 12:25:32.332588 kernel: Movable zone start for each node Dec 16 12:25:32.332594 kernel: Early memory node ranges Dec 16 12:25:32.332600 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Dec 16 12:25:32.332607 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Dec 16 12:25:32.332614 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Dec 16 12:25:32.332620 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Dec 16 12:25:32.332628 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Dec 16 12:25:32.332634 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Dec 16 12:25:32.332641 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Dec 16 12:25:32.332647 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Dec 16 12:25:32.332653 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Dec 16 12:25:32.332660 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Dec 16 12:25:32.332670 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Dec 
16 12:25:32.332677 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Dec 16 12:25:32.332684 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 16 12:25:32.332691 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Dec 16 12:25:32.332698 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 16 12:25:32.332705 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Dec 16 12:25:32.332712 kernel: psci: probing for conduit method from ACPI. Dec 16 12:25:32.332719 kernel: psci: PSCIv1.1 detected in firmware. Dec 16 12:25:32.332727 kernel: psci: Using standard PSCI v0.2 function IDs Dec 16 12:25:32.332734 kernel: psci: Trusted OS migration not required Dec 16 12:25:32.332740 kernel: psci: SMC Calling Convention v1.1 Dec 16 12:25:32.332748 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 16 12:25:32.332755 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 16 12:25:32.332762 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 16 12:25:32.332769 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 16 12:25:32.332776 kernel: Detected PIPT I-cache on CPU0 Dec 16 12:25:32.332783 kernel: CPU features: detected: GIC system register CPU interface Dec 16 12:25:32.332790 kernel: CPU features: detected: Spectre-v4 Dec 16 12:25:32.332797 kernel: CPU features: detected: Spectre-BHB Dec 16 12:25:32.332805 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 16 12:25:32.332812 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 16 12:25:32.332819 kernel: CPU features: detected: ARM erratum 1418040 Dec 16 12:25:32.332825 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 16 12:25:32.332832 kernel: alternatives: applying boot alternatives Dec 16 12:25:32.332840 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 16 12:25:32.332847 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 12:25:32.332854 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 12:25:32.332860 kernel: Fallback order for Node 0: 0 Dec 16 12:25:32.332867 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Dec 16 12:25:32.332875 kernel: Policy zone: DMA Dec 16 12:25:32.332882 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 12:25:32.332889 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Dec 16 12:25:32.332896 kernel: software IO TLB: area num 4. Dec 16 12:25:32.332903 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Dec 16 12:25:32.332910 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Dec 16 12:25:32.332917 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 16 12:25:32.332924 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 12:25:32.332932 kernel: rcu: RCU event tracing is enabled. Dec 16 12:25:32.332939 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 16 12:25:32.332946 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 12:25:32.332954 kernel: Tracing variant of Tasks RCU enabled. 
Dec 16 12:25:32.332961 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 12:25:32.332968 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 16 12:25:32.332975 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 12:25:32.332982 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 12:25:32.332990 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 16 12:25:32.332996 kernel: GICv3: 256 SPIs implemented Dec 16 12:25:32.333003 kernel: GICv3: 0 Extended SPIs implemented Dec 16 12:25:32.333010 kernel: Root IRQ handler: gic_handle_irq Dec 16 12:25:32.333017 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 16 12:25:32.333024 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 16 12:25:32.333032 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 16 12:25:32.333039 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 16 12:25:32.333046 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Dec 16 12:25:32.333053 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Dec 16 12:25:32.333060 kernel: GICv3: using LPI property table @0x0000000040130000 Dec 16 12:25:32.333067 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Dec 16 12:25:32.333074 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 12:25:32.333081 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 12:25:32.333088 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 16 12:25:32.333095 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 16 12:25:32.333103 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 16 12:25:32.333111 kernel: arm-pv: using stolen time PV Dec 16 12:25:32.333118 kernel: Console: colour dummy device 80x25 Dec 16 12:25:32.333126 kernel: ACPI: Core revision 20240827 Dec 16 12:25:32.333133 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 16 12:25:32.333152 kernel: pid_max: default: 32768 minimum: 301 Dec 16 12:25:32.333160 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 12:25:32.333167 kernel: landlock: Up and running. Dec 16 12:25:32.333175 kernel: SELinux: Initializing. Dec 16 12:25:32.333184 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:25:32.333194 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:25:32.333204 kernel: rcu: Hierarchical SRCU implementation. Dec 16 12:25:32.333212 kernel: rcu: Max phase no-delay instances is 400. Dec 16 12:25:32.333221 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 12:25:32.333228 kernel: Remapping and enabling EFI services. Dec 16 12:25:32.333235 kernel: smp: Bringing up secondary CPUs ... 
Dec 16 12:25:32.333244 kernel: Detected PIPT I-cache on CPU1 Dec 16 12:25:32.333256 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 16 12:25:32.333265 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Dec 16 12:25:32.333273 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 12:25:32.333281 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 16 12:25:32.333288 kernel: Detected PIPT I-cache on CPU2 Dec 16 12:25:32.333296 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 16 12:25:32.333306 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Dec 16 12:25:32.333313 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 12:25:32.333330 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 16 12:25:32.333338 kernel: Detected PIPT I-cache on CPU3 Dec 16 12:25:32.333345 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 16 12:25:32.333353 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Dec 16 12:25:32.333361 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 12:25:32.333370 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 16 12:25:32.333377 kernel: smp: Brought up 1 node, 4 CPUs Dec 16 12:25:32.333385 kernel: SMP: Total of 4 processors activated. Dec 16 12:25:32.333393 kernel: CPU: All CPU(s) started at EL1 Dec 16 12:25:32.333401 kernel: CPU features: detected: 32-bit EL0 Support Dec 16 12:25:32.333409 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 16 12:25:32.333417 kernel: CPU features: detected: Common not Private translations Dec 16 12:25:32.333426 kernel: CPU features: detected: CRC32 instructions Dec 16 12:25:32.333434 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 16 12:25:32.333442 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 16 12:25:32.333450 kernel: CPU features: detected: LSE atomic instructions Dec 16 12:25:32.333457 kernel: CPU features: detected: Privileged Access Never Dec 16 12:25:32.333465 kernel: CPU features: detected: RAS Extension Support Dec 16 12:25:32.333473 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 16 12:25:32.333480 kernel: alternatives: applying system-wide alternatives Dec 16 12:25:32.333490 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Dec 16 12:25:32.333499 kernel: Memory: 2450912K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 99040K reserved, 16384K cma-reserved) Dec 16 12:25:32.333508 kernel: devtmpfs: initialized Dec 16 12:25:32.333516 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 12:25:32.333524 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 16 12:25:32.333531 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 16 12:25:32.333539 kernel: 0 pages in range for non-PLT usage Dec 16 12:25:32.333548 kernel: 515184 pages in range for PLT usage Dec 16 12:25:32.333555 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 12:25:32.333563 kernel: SMBIOS 3.0.0 present. 
Dec 16 12:25:32.333570 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Dec 16 12:25:32.333578 kernel: DMI: Memory slots populated: 1/1 Dec 16 12:25:32.333585 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 12:25:32.333593 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 16 12:25:32.333602 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 16 12:25:32.333610 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 16 12:25:32.333618 kernel: audit: initializing netlink subsys (disabled) Dec 16 12:25:32.333626 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Dec 16 12:25:32.333633 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 12:25:32.333641 kernel: cpuidle: using governor menu Dec 16 12:25:32.333649 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 16 12:25:32.333657 kernel: ASID allocator initialised with 32768 entries Dec 16 12:25:32.333665 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 12:25:32.333673 kernel: Serial: AMBA PL011 UART driver Dec 16 12:25:32.333681 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 12:25:32.333688 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 12:25:32.333696 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 16 12:25:32.333704 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 16 12:25:32.333713 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 12:25:32.333722 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 12:25:32.333729 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 16 12:25:32.333737 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 16 12:25:32.333744 kernel: ACPI: Added _OSI(Module Device) Dec 16 12:25:32.333752 kernel: ACPI: Added _OSI(Processor Device) Dec 16 12:25:32.333759 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 12:25:32.333766 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 12:25:32.333776 kernel: ACPI: Interpreter enabled Dec 16 12:25:32.333783 kernel: ACPI: Using GIC for interrupt routing Dec 16 12:25:32.333791 kernel: ACPI: MCFG table detected, 1 entries Dec 16 12:25:32.333798 kernel: ACPI: CPU0 has been hot-added Dec 16 12:25:32.333805 kernel: ACPI: CPU1 has been hot-added Dec 16 12:25:32.333813 kernel: ACPI: CPU2 has been hot-added Dec 16 12:25:32.333820 kernel: ACPI: CPU3 has been hot-added Dec 16 12:25:32.333829 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 16 12:25:32.333837 kernel: printk: legacy console [ttyAMA0] enabled Dec 16 12:25:32.333845 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 12:25:32.334014 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 12:25:32.334127 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 16 12:25:32.334232 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 16 12:25:32.334327 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 16 12:25:32.334421 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 16 12:25:32.334432 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 16 12:25:32.334440 
kernel: PCI host bridge to bus 0000:00 Dec 16 12:25:32.334530 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 16 12:25:32.334614 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 16 12:25:32.334701 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 16 12:25:32.334796 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 12:25:32.334900 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Dec 16 12:25:32.334991 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 16 12:25:32.335077 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Dec 16 12:25:32.335172 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Dec 16 12:25:32.335254 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Dec 16 12:25:32.335353 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 16 12:25:32.335442 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Dec 16 12:25:32.335521 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Dec 16 12:25:32.335596 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 16 12:25:32.335670 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 16 12:25:32.335742 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 16 12:25:32.335752 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 16 12:25:32.335760 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 16 12:25:32.335768 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 16 12:25:32.335775 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 16 12:25:32.335783 kernel: iommu: Default domain type: Translated Dec 16 12:25:32.335799 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 12:25:32.335807 kernel: efivars: Registered efivars operations Dec 16 12:25:32.335814 kernel: vgaarb: loaded Dec 16 12:25:32.335821 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 12:25:32.335829 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 12:25:32.335836 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 12:25:32.335844 kernel: pnp: PnP ACPI init Dec 16 12:25:32.335934 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 16 12:25:32.335944 kernel: pnp: PnP ACPI: found 1 devices Dec 16 12:25:32.335952 kernel: NET: Registered PF_INET protocol family Dec 16 12:25:32.335960 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 12:25:32.335967 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 12:25:32.335975 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 12:25:32.335983 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 12:25:32.335993 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 12:25:32.336000 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 12:25:32.336008 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:25:32.336015 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:25:32.336023 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 12:25:32.336031 kernel: PCI: CLS 0 bytes, default 64 Dec 16 12:25:32.336039 
kernel: kvm [1]: HYP mode not available Dec 16 12:25:32.336048 kernel: Initialise system trusted keyrings Dec 16 12:25:32.336056 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 12:25:32.336064 kernel: Key type asymmetric registered Dec 16 12:25:32.336072 kernel: Asymmetric key parser 'x509' registered Dec 16 12:25:32.336080 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 12:25:32.336088 kernel: io scheduler mq-deadline registered Dec 16 12:25:32.336096 kernel: io scheduler kyber registered Dec 16 12:25:32.336105 kernel: io scheduler bfq registered Dec 16 12:25:32.336113 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 16 12:25:32.336120 kernel: ACPI: button: Power Button [PWRB] Dec 16 12:25:32.336128 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 16 12:25:32.336223 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 16 12:25:32.336258 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 12:25:32.336270 kernel: thunder_xcv, ver 1.0 Dec 16 12:25:32.336280 kernel: thunder_bgx, ver 1.0 Dec 16 12:25:32.336287 kernel: nicpf, ver 1.0 Dec 16 12:25:32.336295 kernel: nicvf, ver 1.0 Dec 16 12:25:32.336407 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 12:25:32.336492 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:25:31 UTC (1765887931) Dec 16 12:25:32.336503 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 12:25:32.336514 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 16 12:25:32.336522 kernel: watchdog: NMI not fully supported Dec 16 12:25:32.336530 kernel: watchdog: Hard watchdog permanently disabled Dec 16 12:25:32.336537 kernel: NET: Registered PF_INET6 protocol family Dec 16 12:25:32.336545 kernel: Segment Routing with IPv6 Dec 16 12:25:32.336553 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 12:25:32.336560 kernel: NET: Registered PF_PACKET protocol family Dec 16 12:25:32.336568 kernel: Key type dns_resolver registered Dec 16 12:25:32.336577 kernel: registered taskstats version 1 Dec 16 12:25:32.336585 kernel: Loading compiled-in X.509 certificates Dec 16 12:25:32.336593 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9' Dec 16 12:25:32.336600 kernel: Demotion targets for Node 0: null Dec 16 12:25:32.336608 kernel: Key type .fscrypt registered Dec 16 12:25:32.336615 kernel: Key type fscrypt-provisioning registered Dec 16 12:25:32.336623 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 16 12:25:32.336632 kernel: ima: Allocated hash algorithm: sha1 Dec 16 12:25:32.336640 kernel: ima: No architecture policies found Dec 16 12:25:32.336648 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 12:25:32.336655 kernel: clk: Disabling unused clocks Dec 16 12:25:32.336663 kernel: PM: genpd: Disabling unused power domains Dec 16 12:25:32.336671 kernel: Freeing unused kernel memory: 12416K Dec 16 12:25:32.336678 kernel: Run /init as init process Dec 16 12:25:32.336688 kernel: with arguments: Dec 16 12:25:32.336695 kernel: /init Dec 16 12:25:32.336702 kernel: with environment: Dec 16 12:25:32.336710 kernel: HOME=/ Dec 16 12:25:32.336718 kernel: TERM=linux Dec 16 12:25:32.336810 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 16 12:25:32.336890 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Dec 16 12:25:32.336903 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 12:25:32.336911 kernel: GPT:16515071 != 27000831 Dec 16 12:25:32.336919 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 12:25:32.336926 kernel: GPT:16515071 != 27000831 Dec 16 12:25:32.336934 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 12:25:32.336941 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:25:32.336950 kernel: SCSI subsystem initialized Dec 16 12:25:32.336958 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 12:25:32.336966 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:25:32.336974 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:25:32.336982 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 12:25:32.336989 kernel: raid6: neonx8 gen() 15626 MB/s Dec 16 12:25:32.336997 kernel: raid6: neonx4 gen() 15675 MB/s Dec 16 12:25:32.337006 kernel: raid6: neonx2 gen() 10961 MB/s Dec 16 12:25:32.337014 kernel: raid6: neonx1 gen() 8113 MB/s Dec 16 12:25:32.337021 kernel: raid6: int64x8 gen() 5969 MB/s Dec 16 12:25:32.337029 kernel: raid6: int64x4 gen() 6752 MB/s Dec 16 12:25:32.337036 kernel: raid6: int64x2 gen() 5115 MB/s Dec 16 12:25:32.337044 kernel: raid6: int64x1 gen() 4721 MB/s Dec 16 12:25:32.337052 kernel: raid6: using algorithm neonx4 gen() 15675 MB/s Dec 16 12:25:32.337061 kernel: raid6: .... 
xor() 11999 MB/s, rmw enabled Dec 16 12:25:32.337069 kernel: raid6: using neon recovery algorithm Dec 16 12:25:32.337076 kernel: xor: measuring software checksum speed Dec 16 12:25:32.337084 kernel: 8regs : 13810 MB/sec Dec 16 12:25:32.337091 kernel: 32regs : 21670 MB/sec Dec 16 12:25:32.337099 kernel: arm64_neon : 28051 MB/sec Dec 16 12:25:32.337106 kernel: xor: using function: arm64_neon (28051 MB/sec) Dec 16 12:25:32.337114 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:25:32.337123 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (204) Dec 16 12:25:32.337131 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc Dec 16 12:25:32.337149 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:25:32.337157 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:25:32.337165 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:25:32.337172 kernel: loop: module loaded Dec 16 12:25:32.337180 kernel: loop0: detected capacity change from 0 to 91480 Dec 16 12:25:32.337189 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:25:32.337198 systemd[1]: Successfully made /usr/ read-only. Dec 16 12:25:32.337208 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:25:32.337217 systemd[1]: Detected virtualization kvm. Dec 16 12:25:32.337225 systemd[1]: Detected architecture arm64. Dec 16 12:25:32.337234 systemd[1]: Running in initrd. Dec 16 12:25:32.337242 systemd[1]: No hostname configured, using default hostname. Dec 16 12:25:32.337250 systemd[1]: Hostname set to . Dec 16 12:25:32.337258 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 12:25:32.337266 systemd[1]: Queued start job for default target initrd.target. Dec 16 12:25:32.337274 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:25:32.337283 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:25:32.337293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:25:32.337301 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 12:25:32.337310 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:25:32.337328 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 12:25:32.337337 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 12:25:32.337348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:25:32.337356 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:25:32.337364 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:25:32.337373 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:25:32.337381 systemd[1]: Reached target slices.target - Slice Units. 
Dec 16 12:25:32.337389 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:25:32.337397 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:25:32.337406 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:25:32.337415 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:25:32.337423 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:25:32.337432 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 12:25:32.337451 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 12:25:32.337462 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:25:32.337471 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:25:32.337480 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:25:32.337488 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:25:32.337497 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:25:32.337505 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 12:25:32.337513 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:25:32.337523 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 12:25:32.337533 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 12:25:32.337542 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 12:25:32.337550 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:25:32.337559 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:25:32.337569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:25:32.337578 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 12:25:32.337586 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:25:32.337595 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 12:25:32.337604 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:25:32.337634 systemd-journald[346]: Collecting audit messages is enabled. Dec 16 12:25:32.337659 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 12:25:32.337668 systemd-journald[346]: Journal started Dec 16 12:25:32.337690 systemd-journald[346]: Runtime Journal (/run/log/journal/52d24fe85f7b4e33b4c3460241b2c3ba) is 6M, max 48.5M, 42.4M free. Dec 16 12:25:32.338901 systemd-modules-load[347]: Inserted module 'br_netfilter' Dec 16 12:25:32.340402 kernel: Bridge firewalling registered Dec 16 12:25:32.340423 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:25:32.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.344091 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Dec 16 12:25:32.348720 kernel: audit: type=1130 audit(1765887932.340:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.348744 kernel: audit: type=1130 audit(1765887932.345:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.349616 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:25:32.356427 kernel: audit: type=1130 audit(1765887932.350:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.353960 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:25:32.358628 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:25:32.367810 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:25:32.373281 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:25:32.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.378100 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:25:32.382902 kernel: audit: type=1130 audit(1765887932.375:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.382932 kernel: audit: type=1130 audit(1765887932.379:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.382877 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:25:32.388155 kernel: audit: type=1130 audit(1765887932.384:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.383734 systemd-tmpfiles[369]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Dec 16 12:25:32.392475 kernel: audit: type=1334 audit(1765887932.390:8): prog-id=6 op=LOAD Dec 16 12:25:32.390000 audit: BPF prog-id=6 op=LOAD Dec 16 12:25:32.387518 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:25:32.391470 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:25:32.399543 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:25:32.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.404385 kernel: audit: type=1130 audit(1765887932.400:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.419067 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:25:32.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.421828 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 12:25:32.424946 kernel: audit: type=1130 audit(1765887932.420:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.442694 systemd-resolved[378]: Positive Trust Anchors: Dec 16 12:25:32.442714 systemd-resolved[378]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:25:32.442717 systemd-resolved[378]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:25:32.442749 systemd-resolved[378]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:25:32.456474 dracut-cmdline[393]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 16 12:25:32.468960 systemd-resolved[378]: Defaulting to hostname 'linux'. Dec 16 12:25:32.469991 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:25:32.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.471510 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:25:32.539351 kernel: Loading iSCSI transport class v2.0-870. 
Dec 16 12:25:32.548345 kernel: iscsi: registered transport (tcp) Dec 16 12:25:32.562515 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:25:32.562563 kernel: QLogic iSCSI HBA Driver Dec 16 12:25:32.585813 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:25:32.604053 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:25:32.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.606394 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:25:32.659619 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:25:32.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.662072 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:25:32.663839 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:25:32.708420 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:25:32.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.710000 audit: BPF prog-id=7 op=LOAD Dec 16 12:25:32.710000 audit: BPF prog-id=8 op=LOAD Dec 16 12:25:32.711288 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:25:32.742056 systemd-udevd[632]: Using default interface naming scheme 'v257'. Dec 16 12:25:32.751027 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:25:32.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.754024 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:25:32.780144 dracut-pre-trigger[684]: rd.md=0: removing MD RAID activation Dec 16 12:25:32.795887 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:25:32.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.798000 audit: BPF prog-id=9 op=LOAD Dec 16 12:25:32.799494 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:25:32.821511 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:25:32.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.824008 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:25:32.855962 systemd-networkd[758]: lo: Link UP Dec 16 12:25:32.855970 systemd-networkd[758]: lo: Gained carrier Dec 16 12:25:32.857489 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 16 12:25:32.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.858517 systemd[1]: Reached target network.target - Network. Dec 16 12:25:32.889374 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:25:32.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.892510 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:25:32.946840 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 12:25:32.961707 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 12:25:32.971009 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:25:32.978897 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 12:25:32.983471 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:25:32.989306 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:25:32.989452 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:25:32.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:32.991369 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:25:32.993775 systemd-networkd[758]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:25:32.993779 systemd-networkd[758]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:25:32.994310 systemd-networkd[758]: eth0: Link UP Dec 16 12:25:32.995746 systemd-networkd[758]: eth0: Gained carrier Dec 16 12:25:32.995759 systemd-networkd[758]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:25:32.999568 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:25:33.006029 disk-uuid[804]: Primary Header is updated. Dec 16 12:25:33.006029 disk-uuid[804]: Secondary Entries is updated. Dec 16 12:25:33.006029 disk-uuid[804]: Secondary Header is updated. Dec 16 12:25:33.015391 systemd-networkd[758]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:25:33.035484 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:25:33.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:33.069719 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:25:33.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:33.071293 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:25:33.072648 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:25:33.074451 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:25:33.077233 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:25:33.103702 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:25:33.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:34.040400 disk-uuid[806]: Warning: The kernel is still using the old partition table. Dec 16 12:25:34.040400 disk-uuid[806]: The new table will be used at the next reboot or after you Dec 16 12:25:34.040400 disk-uuid[806]: run partprobe(8) or kpartx(8) Dec 16 12:25:34.040400 disk-uuid[806]: The operation has completed successfully. Dec 16 12:25:34.050513 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:25:34.051628 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:25:34.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:34.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:34.053916 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:25:34.084144 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (835) Dec 16 12:25:34.084195 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:25:34.084206 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:25:34.087552 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:25:34.087603 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:25:34.094380 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:25:34.095870 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:25:34.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:34.098007 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 16 12:25:34.235805 ignition[854]: Ignition 2.22.0 Dec 16 12:25:34.235823 ignition[854]: Stage: fetch-offline Dec 16 12:25:34.235861 ignition[854]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:25:34.235871 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:25:34.236075 ignition[854]: parsed url from cmdline: "" Dec 16 12:25:34.236079 ignition[854]: no config URL provided Dec 16 12:25:34.236083 ignition[854]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:25:34.236093 ignition[854]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:25:34.236141 ignition[854]: op(1): [started] loading QEMU firmware config module Dec 16 12:25:34.236145 ignition[854]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 16 12:25:34.243194 ignition[854]: op(1): [finished] loading QEMU firmware config module Dec 16 12:25:34.292083 ignition[854]: parsing config with SHA512: 26a252117825f294a2d87bb8ba72c2896ad2ebe905cbae10dcbf8540e44ee9f09eeef98f69f7354689de2a6760e23c8a4a368e2cf959017a68611fba164cb534 Dec 16 12:25:34.298691 unknown[854]: fetched base config from "system" Dec 16 12:25:34.299086 ignition[854]: fetch-offline: fetch-offline passed Dec 16 12:25:34.298705 unknown[854]: fetched user config from "qemu" Dec 16 12:25:34.299157 ignition[854]: Ignition finished successfully Dec 16 12:25:34.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:34.300934 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:25:34.303149 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 16 12:25:34.304122 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:25:34.337461 ignition[867]: Ignition 2.22.0 Dec 16 12:25:34.337478 ignition[867]: Stage: kargs Dec 16 12:25:34.337621 ignition[867]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:25:34.337630 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:25:34.338450 ignition[867]: kargs: kargs passed Dec 16 12:25:34.338497 ignition[867]: Ignition finished successfully Dec 16 12:25:34.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:34.343422 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:25:34.345851 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 12:25:34.394140 ignition[875]: Ignition 2.22.0 Dec 16 12:25:34.394157 ignition[875]: Stage: disks Dec 16 12:25:34.394340 ignition[875]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:25:34.394350 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:25:34.395254 ignition[875]: disks: disks passed Dec 16 12:25:34.395305 ignition[875]: Ignition finished successfully Dec 16 12:25:34.399384 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 12:25:34.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:34.400537 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Dec 16 12:25:34.402035 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:25:34.404001 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:25:34.405793 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:25:34.407365 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:25:34.410153 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:25:34.458303 systemd-fsck[885]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 16 12:25:34.463149 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:25:34.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:34.468002 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:25:34.547017 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:25:34.548360 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:25:34.550731 kernel: EXT4-fs (vda9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none. Dec 16 12:25:34.551099 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:25:34.571291 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:25:34.572277 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 12:25:34.572330 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:25:34.572383 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:25:34.580257 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:25:34.583292 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 12:25:34.590617 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (893) Dec 16 12:25:34.590672 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:25:34.590683 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:25:34.597518 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:25:34.597558 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:25:34.600248 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:25:34.632844 initrd-setup-root[919]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 12:25:34.637597 initrd-setup-root[926]: cut: /sysroot/etc/group: No such file or directory Dec 16 12:25:34.643155 initrd-setup-root[933]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 12:25:34.648026 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 12:25:34.740429 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:25:34.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:34.745387 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Dec 16 12:25:34.749400 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:25:34.773650 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:25:34.775014 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:25:34.788971 systemd-networkd[758]: eth0: Gained IPv6LL Dec 16 12:25:34.800864 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:25:34.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:34.809806 ignition[1008]: INFO : Ignition 2.22.0 Dec 16 12:25:34.809806 ignition[1008]: INFO : Stage: mount Dec 16 12:25:34.811457 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:25:34.811457 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:25:34.811457 ignition[1008]: INFO : mount: mount passed Dec 16 12:25:34.811457 ignition[1008]: INFO : Ignition finished successfully Dec 16 12:25:34.815893 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:25:34.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:34.818451 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:25:35.550793 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:25:35.571497 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1021) Dec 16 12:25:35.571563 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:25:35.571575 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:25:35.575880 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:25:35.575926 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:25:35.577451 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 12:25:35.617219 ignition[1038]: INFO : Ignition 2.22.0 Dec 16 12:25:35.617219 ignition[1038]: INFO : Stage: files Dec 16 12:25:35.619085 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:25:35.619085 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:25:35.619085 ignition[1038]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:25:35.622467 ignition[1038]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:25:35.622467 ignition[1038]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:25:35.625780 ignition[1038]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:25:35.627311 ignition[1038]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:25:35.627311 ignition[1038]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:25:35.626578 unknown[1038]: wrote ssh authorized keys file for user: core Dec 16 12:25:35.631095 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:25:35.631095 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 16 12:25:35.667207 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:25:35.806820 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:25:35.806820 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:25:35.811413 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:25:35.811413 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:25:35.811413 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:25:35.811413 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:25:35.811413 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:25:35.811413 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:25:35.811413 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:25:35.840666 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:25:35.840666 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:25:35.840666 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:25:35.840666 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:25:35.840666 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:25:35.840666 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Dec 16 12:25:36.118797 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:25:36.314388 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:25:36.314388 ignition[1038]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:25:36.318563 ignition[1038]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:25:36.322260 ignition[1038]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:25:36.322260 ignition[1038]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:25:36.322260 ignition[1038]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 16 12:25:36.328974 ignition[1038]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:25:36.328974 ignition[1038]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:25:36.328974 ignition[1038]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 16 12:25:36.328974 ignition[1038]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 16 12:25:36.341905 ignition[1038]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 12:25:36.346382 ignition[1038]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 12:25:36.348177 ignition[1038]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 16 12:25:36.348177 ignition[1038]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:25:36.348177 ignition[1038]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:25:36.348177 ignition[1038]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:25:36.348177 ignition[1038]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:25:36.348177 ignition[1038]: INFO : files: files passed Dec 16 12:25:36.348177 ignition[1038]: INFO : Ignition finished successfully Dec 16 12:25:36.371390 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 16 12:25:36.371433 kernel: audit: type=1130 audit(1765887936.350:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:36.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.348856 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:25:36.354627 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:25:36.359355 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:25:36.379210 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:25:36.379603 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:25:36.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.390202 initrd-setup-root-after-ignition[1070]: grep: /sysroot/oem/oem-release: No such file or directory Dec 16 12:25:36.393586 kernel: audit: type=1130 audit(1765887936.385:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.393621 kernel: audit: type=1131 audit(1765887936.385:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.393633 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:25:36.393633 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:25:36.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.401219 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:25:36.394211 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:25:36.401299 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:25:36.406335 kernel: audit: type=1130 audit(1765887936.396:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.405533 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:25:36.489992 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:25:36.490114 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:25:36.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.495591 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Dec 16 12:25:36.496572 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:25:36.501262 kernel: audit: type=1130 audit(1765887936.492:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.501295 kernel: audit: type=1131 audit(1765887936.495:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.500707 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:25:36.501659 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:25:36.537649 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:25:36.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.541486 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:25:36.544158 kernel: audit: type=1130 audit(1765887936.538:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.572783 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:25:36.572966 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:25:36.577265 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:25:36.583018 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:25:36.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.583983 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:25:36.589984 kernel: audit: type=1131 audit(1765887936.585:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.584144 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:25:36.589088 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:25:36.590993 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:25:36.592767 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:25:36.594548 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:25:36.596344 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:25:36.598444 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:25:36.600762 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:25:36.602582 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 16 12:25:36.604742 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:25:36.606272 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:25:36.608185 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:25:36.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.609578 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:25:36.616719 kernel: audit: type=1131 audit(1765887936.610:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.609725 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:25:36.614939 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:25:36.616006 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:25:36.617835 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:25:36.622440 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:25:36.623635 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:25:36.629345 kernel: audit: type=1131 audit(1765887936.625:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.623788 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:25:36.629448 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:25:36.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.629609 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:25:36.631479 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:25:36.632904 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:25:36.637232 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:25:36.638551 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:25:36.640338 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:25:36.641780 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:25:36.641873 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:25:36.643159 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:25:36.643240 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:25:36.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.644868 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. 
Dec 16 12:25:36.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.644941 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:25:36.646451 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:25:36.646575 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:25:36.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.648252 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:25:36.648429 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:25:36.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.650835 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:25:36.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.652222 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:25:36.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.652368 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:25:36.655261 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:25:36.656140 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:25:36.656264 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:25:36.658264 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:25:36.658380 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:25:36.659906 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:25:36.660012 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:25:36.666116 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:25:36.677772 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:25:36.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.692547 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Dec 16 12:25:36.700495 ignition[1098]: INFO : Ignition 2.22.0 Dec 16 12:25:36.701799 ignition[1098]: INFO : Stage: umount Dec 16 12:25:36.701799 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:25:36.701799 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:25:36.705368 ignition[1098]: INFO : umount: umount passed Dec 16 12:25:36.705368 ignition[1098]: INFO : Ignition finished successfully Dec 16 12:25:36.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.705200 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:25:36.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.705353 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:25:36.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.706568 systemd[1]: Stopped target network.target - Network. Dec 16 12:25:36.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.707953 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:25:36.708028 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:25:36.709461 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:25:36.709517 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:25:36.710944 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:25:36.710997 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:25:36.712605 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:25:36.712656 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:25:36.714242 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:25:36.718070 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:25:36.726860 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:25:36.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.727193 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:25:36.730631 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:25:36.730795 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:25:36.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.736728 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Dec 16 12:25:36.736855 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:25:36.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.741000 audit: BPF prog-id=6 op=UNLOAD Dec 16 12:25:36.741000 audit: BPF prog-id=9 op=UNLOAD Dec 16 12:25:36.741837 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:25:36.742988 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:25:36.743035 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:25:36.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.744705 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:25:36.744782 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:25:36.747241 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:25:36.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.748112 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:25:36.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.748182 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:25:36.750012 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:25:36.750063 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:25:36.751647 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:25:36.751692 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:25:36.753376 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:25:36.771013 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:25:36.771181 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:25:36.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.773696 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:25:36.773742 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:25:36.776095 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:25:36.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.776144 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 16 12:25:36.777940 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:25:36.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.778005 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:25:36.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.780464 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:25:36.780522 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:25:36.783003 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:25:36.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.783062 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:25:36.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.786085 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:25:36.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.787057 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:25:36.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.787128 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:25:36.788966 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:25:36.789016 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:25:36.790630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:25:36.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:36.790681 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:25:36.793193 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:25:36.793281 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:25:36.798277 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:25:36.798395 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:25:36.800112 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Dec 16 12:25:36.802439 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:25:36.836451 systemd[1]: Switching root. Dec 16 12:25:36.869884 systemd-journald[346]: Journal stopped Dec 16 12:25:37.782087 systemd-journald[346]: Received SIGTERM from PID 1 (systemd). Dec 16 12:25:37.782147 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:25:37.782167 kernel: SELinux: policy capability open_perms=1 Dec 16 12:25:37.782178 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:25:37.782188 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:25:37.782202 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:25:37.782213 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:25:37.782223 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:25:37.782233 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:25:37.782244 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:25:37.782257 systemd[1]: Successfully loaded SELinux policy in 70.108ms. Dec 16 12:25:37.782274 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.343ms. Dec 16 12:25:37.783874 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:25:37.783891 systemd[1]: Detected virtualization kvm. Dec 16 12:25:37.783902 systemd[1]: Detected architecture arm64. Dec 16 12:25:37.783912 systemd[1]: Detected first boot. Dec 16 12:25:37.783923 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 12:25:37.783933 zram_generator::config[1142]: No configuration found. Dec 16 12:25:37.783949 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:25:37.783962 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:25:37.783979 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:25:37.783990 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:25:37.784005 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:25:37.784017 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:25:37.784030 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:25:37.784043 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:25:37.784055 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:25:37.784066 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:25:37.784078 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:25:37.784090 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:25:37.784102 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:25:37.784119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:25:37.784137 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:25:37.784149 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Dec 16 12:25:37.784160 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:25:37.784172 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:25:37.784184 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:25:37.784195 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:25:37.784207 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:25:37.784221 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:25:37.784234 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:25:37.784246 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:25:37.784259 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:25:37.784270 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:25:37.784282 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:25:37.784295 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:25:37.784307 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:25:37.784327 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:25:37.784343 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:25:37.784355 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:25:37.784367 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:25:37.784379 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:25:37.784392 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:25:37.784404 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 12:25:37.784415 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:25:37.784426 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:25:37.784437 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:25:37.784448 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:25:37.784467 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:25:37.784482 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:25:37.784493 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:25:37.784503 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:25:37.784515 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:25:37.784526 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:25:37.784536 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:25:37.784548 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:25:37.784560 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:25:37.784575 systemd[1]: Reached target machines.target - Containers. 
Dec 16 12:25:37.784589 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:25:37.784600 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:25:37.784611 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:25:37.784622 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:25:37.784632 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:25:37.784647 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:25:37.784658 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:25:37.784672 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:25:37.784685 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:25:37.784699 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:25:37.784711 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:25:37.784722 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:25:37.784734 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:25:37.784745 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:25:37.784756 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:25:37.784767 kernel: fuse: init (API version 7.41) Dec 16 12:25:37.784777 kernel: ACPI: bus type drm_connector registered Dec 16 12:25:37.784787 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:25:37.784800 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:25:37.784812 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:25:37.784823 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:25:37.784857 systemd-journald[1211]: Collecting audit messages is enabled. Dec 16 12:25:37.784883 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:25:37.784895 systemd-journald[1211]: Journal started Dec 16 12:25:37.784917 systemd-journald[1211]: Runtime Journal (/run/log/journal/52d24fe85f7b4e33b4c3460241b2c3ba) is 6M, max 48.5M, 42.4M free. Dec 16 12:25:37.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:37.739000 audit: BPF prog-id=14 op=UNLOAD Dec 16 12:25:37.739000 audit: BPF prog-id=13 op=UNLOAD Dec 16 12:25:37.744000 audit: BPF prog-id=15 op=LOAD Dec 16 12:25:37.744000 audit: BPF prog-id=16 op=LOAD Dec 16 12:25:37.745000 audit: BPF prog-id=17 op=LOAD Dec 16 12:25:37.780000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 12:25:37.780000 audit[1211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffd294cba0 a2=4000 a3=0 items=0 ppid=1 pid=1211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:37.780000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 12:25:37.544618 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:25:37.566525 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 12:25:37.567001 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:25:37.789184 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:25:37.793229 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:25:37.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.794488 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:25:37.795880 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:25:37.797148 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:25:37.798286 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:25:37.799484 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:25:37.800742 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:25:37.803386 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:25:37.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.804753 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:25:37.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.806220 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:25:37.806402 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:25:37.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.807708 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 16 12:25:37.807864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:25:37.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.809285 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:25:37.809459 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:25:37.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.810700 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:25:37.810858 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:25:37.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.812209 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:25:37.812375 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:25:37.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.813895 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:25:37.814057 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:25:37.815443 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:25:37.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:25:37.817069 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:25:37.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.819466 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:25:37.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.821001 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:25:37.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.836142 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:25:37.838161 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 12:25:37.840969 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:25:37.843699 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:25:37.844975 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:25:37.845019 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:25:37.847248 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:25:37.849037 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:25:37.849184 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:25:37.857575 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:25:37.860138 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:25:37.861614 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:25:37.864283 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:25:37.865420 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:25:37.866752 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:25:37.870857 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:25:37.872782 systemd-journald[1211]: Time spent on flushing to /var/log/journal/52d24fe85f7b4e33b4c3460241b2c3ba is 13.825ms for 1004 entries. Dec 16 12:25:37.872782 systemd-journald[1211]: System Journal (/var/log/journal/52d24fe85f7b4e33b4c3460241b2c3ba) is 8M, max 163.5M, 155.5M free. Dec 16 12:25:37.892336 systemd-journald[1211]: Received client request to flush runtime journal. 
Dec 16 12:25:37.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.874386 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:25:37.886428 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:25:37.888037 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:25:37.889672 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:25:37.894383 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:25:37.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.899348 kernel: loop1: detected capacity change from 0 to 100192 Dec 16 12:25:37.900474 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:25:37.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.901852 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:25:37.905805 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:25:37.919839 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:25:37.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.922502 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:25:37.925000 audit: BPF prog-id=18 op=LOAD Dec 16 12:25:37.925000 audit: BPF prog-id=19 op=LOAD Dec 16 12:25:37.925000 audit: BPF prog-id=20 op=LOAD Dec 16 12:25:37.926646 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 12:25:37.929000 audit: BPF prog-id=21 op=LOAD Dec 16 12:25:37.931574 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:25:37.934528 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:25:37.935441 kernel: loop2: detected capacity change from 0 to 200800 Dec 16 12:25:37.942000 audit: BPF prog-id=22 op=LOAD Dec 16 12:25:37.942000 audit: BPF prog-id=23 op=LOAD Dec 16 12:25:37.942000 audit: BPF prog-id=24 op=LOAD Dec 16 12:25:37.943585 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 12:25:37.945000 audit: BPF prog-id=25 op=LOAD Dec 16 12:25:37.948000 audit: BPF prog-id=26 op=LOAD Dec 16 12:25:37.948000 audit: BPF prog-id=27 op=LOAD Dec 16 12:25:37.950524 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:25:37.955397 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Dec 16 12:25:37.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.970552 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Dec 16 12:25:37.970576 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Dec 16 12:25:37.974339 kernel: loop3: detected capacity change from 0 to 109872 Dec 16 12:25:37.977419 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:25:37.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:37.992901 systemd-nsresourced[1277]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 12:25:37.994218 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 12:25:37.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.004346 kernel: loop4: detected capacity change from 0 to 100192 Dec 16 12:25:38.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.007634 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:25:38.013358 kernel: loop5: detected capacity change from 0 to 200800 Dec 16 12:25:38.023390 kernel: loop6: detected capacity change from 0 to 109872 Dec 16 12:25:38.028043 (sd-merge)[1292]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 16 12:25:38.031448 (sd-merge)[1292]: Merged extensions into '/usr'. Dec 16 12:25:38.037467 systemd[1]: Reload requested from client PID 1259 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:25:38.037484 systemd[1]: Reloading... Dec 16 12:25:38.064513 systemd-oomd[1274]: No swap; memory pressure usage will be degraded Dec 16 12:25:38.071123 systemd-resolved[1275]: Positive Trust Anchors: Dec 16 12:25:38.071141 systemd-resolved[1275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:25:38.071145 systemd-resolved[1275]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:25:38.071177 systemd-resolved[1275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:25:38.082696 systemd-resolved[1275]: Defaulting to hostname 'linux'. Dec 16 12:25:38.087351 zram_generator::config[1328]: No configuration found. Dec 16 12:25:38.246355 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Dec 16 12:25:38.246589 systemd[1]: Reloading finished in 208 ms. Dec 16 12:25:38.277225 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 12:25:38.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.278662 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:25:38.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.280059 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:25:38.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.283996 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:25:38.297789 systemd[1]: Starting ensure-sysext.service... Dec 16 12:25:38.301000 audit: BPF prog-id=28 op=LOAD Dec 16 12:25:38.301000 audit: BPF prog-id=25 op=UNLOAD Dec 16 12:25:38.301000 audit: BPF prog-id=29 op=LOAD Dec 16 12:25:38.301000 audit: BPF prog-id=30 op=LOAD Dec 16 12:25:38.301000 audit: BPF prog-id=26 op=UNLOAD Dec 16 12:25:38.301000 audit: BPF prog-id=27 op=UNLOAD Dec 16 12:25:38.301000 audit: BPF prog-id=31 op=LOAD Dec 16 12:25:38.301000 audit: BPF prog-id=15 op=UNLOAD Dec 16 12:25:38.301000 audit: BPF prog-id=32 op=LOAD Dec 16 12:25:38.302000 audit: BPF prog-id=33 op=LOAD Dec 16 12:25:38.302000 audit: BPF prog-id=16 op=UNLOAD Dec 16 12:25:38.302000 audit: BPF prog-id=17 op=UNLOAD Dec 16 12:25:38.299816 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:25:38.303000 audit: BPF prog-id=34 op=LOAD Dec 16 12:25:38.303000 audit: BPF prog-id=21 op=UNLOAD Dec 16 12:25:38.303000 audit: BPF prog-id=35 op=LOAD Dec 16 12:25:38.303000 audit: BPF prog-id=22 op=UNLOAD Dec 16 12:25:38.303000 audit: BPF prog-id=36 op=LOAD Dec 16 12:25:38.304000 audit: BPF prog-id=37 op=LOAD Dec 16 12:25:38.304000 audit: BPF prog-id=23 op=UNLOAD Dec 16 12:25:38.304000 audit: BPF prog-id=24 op=UNLOAD Dec 16 12:25:38.304000 audit: BPF prog-id=38 op=LOAD Dec 16 12:25:38.304000 audit: BPF prog-id=18 op=UNLOAD Dec 16 12:25:38.304000 audit: BPF prog-id=39 op=LOAD Dec 16 12:25:38.304000 audit: BPF prog-id=40 op=LOAD Dec 16 12:25:38.304000 audit: BPF prog-id=19 op=UNLOAD Dec 16 12:25:38.304000 audit: BPF prog-id=20 op=UNLOAD Dec 16 12:25:38.310020 systemd[1]: Reload requested from client PID 1362 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:25:38.310050 systemd[1]: Reloading... Dec 16 12:25:38.323838 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:25:38.323865 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:25:38.324157 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:25:38.325137 systemd-tmpfiles[1363]: ACLs are not supported, ignoring. Dec 16 12:25:38.325197 systemd-tmpfiles[1363]: ACLs are not supported, ignoring. 
Dec 16 12:25:38.337001 systemd-tmpfiles[1363]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:25:38.337016 systemd-tmpfiles[1363]: Skipping /boot Dec 16 12:25:38.346149 systemd-tmpfiles[1363]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:25:38.346170 systemd-tmpfiles[1363]: Skipping /boot Dec 16 12:25:38.371388 zram_generator::config[1395]: No configuration found. Dec 16 12:25:38.511863 systemd[1]: Reloading finished in 201 ms. Dec 16 12:25:38.533342 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:25:38.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.537000 audit: BPF prog-id=41 op=LOAD Dec 16 12:25:38.537000 audit: BPF prog-id=34 op=UNLOAD Dec 16 12:25:38.537000 audit: BPF prog-id=42 op=LOAD Dec 16 12:25:38.537000 audit: BPF prog-id=28 op=UNLOAD Dec 16 12:25:38.537000 audit: BPF prog-id=43 op=LOAD Dec 16 12:25:38.537000 audit: BPF prog-id=44 op=LOAD Dec 16 12:25:38.537000 audit: BPF prog-id=29 op=UNLOAD Dec 16 12:25:38.537000 audit: BPF prog-id=30 op=UNLOAD Dec 16 12:25:38.538000 audit: BPF prog-id=45 op=LOAD Dec 16 12:25:38.538000 audit: BPF prog-id=31 op=UNLOAD Dec 16 12:25:38.538000 audit: BPF prog-id=46 op=LOAD Dec 16 12:25:38.538000 audit: BPF prog-id=47 op=LOAD Dec 16 12:25:38.538000 audit: BPF prog-id=32 op=UNLOAD Dec 16 12:25:38.538000 audit: BPF prog-id=33 op=UNLOAD Dec 16 12:25:38.538000 audit: BPF prog-id=48 op=LOAD Dec 16 12:25:38.538000 audit: BPF prog-id=35 op=UNLOAD Dec 16 12:25:38.538000 audit: BPF prog-id=49 op=LOAD Dec 16 12:25:38.538000 audit: BPF prog-id=50 op=LOAD Dec 16 12:25:38.538000 audit: BPF prog-id=36 op=UNLOAD Dec 16 12:25:38.538000 audit: BPF prog-id=37 op=UNLOAD Dec 16 12:25:38.540000 audit: BPF prog-id=51 op=LOAD Dec 16 12:25:38.540000 audit: BPF prog-id=38 op=UNLOAD Dec 16 12:25:38.540000 audit: BPF prog-id=52 op=LOAD Dec 16 12:25:38.540000 audit: BPF prog-id=53 op=LOAD Dec 16 12:25:38.540000 audit: BPF prog-id=39 op=UNLOAD Dec 16 12:25:38.540000 audit: BPF prog-id=40 op=UNLOAD Dec 16 12:25:38.560206 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:25:38.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.568012 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:25:38.570740 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:25:38.584892 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:25:38.589678 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:25:38.590000 audit: BPF prog-id=8 op=UNLOAD Dec 16 12:25:38.590000 audit: BPF prog-id=7 op=UNLOAD Dec 16 12:25:38.591000 audit: BPF prog-id=54 op=LOAD Dec 16 12:25:38.591000 audit: BPF prog-id=55 op=LOAD Dec 16 12:25:38.592679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:25:38.597699 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Dec 16 12:25:38.602929 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:25:38.605594 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:25:38.611154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:25:38.614557 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:25:38.615848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:25:38.616061 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:25:38.616167 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:25:38.618393 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:25:38.618565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:25:38.618706 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:25:38.618789 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:25:38.622939 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:25:38.625742 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:25:38.627109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:25:38.627337 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:25:38.627430 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:25:38.631000 audit[1443]: SYSTEM_BOOT pid=1443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.633555 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:25:38.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.639861 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:25:38.640441 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:25:38.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 12:25:38.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.642935 systemd[1]: Finished ensure-sysext.service. Dec 16 12:25:38.644871 systemd-udevd[1440]: Using default interface naming scheme 'v257'. Dec 16 12:25:38.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.645907 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:25:38.648713 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:25:38.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.650604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:25:38.650856 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:25:38.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.652956 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:25:38.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.654491 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:25:38.656055 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:25:38.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.665934 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:25:38.666216 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:25:38.667000 audit: BPF prog-id=56 op=LOAD Dec 16 12:25:38.668712 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Dec 16 12:25:38.670105 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:25:38.671846 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:25:38.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.674007 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:25:38.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:38.681000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 12:25:38.681000 audit[1475]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd784ff40 a2=420 a3=0 items=0 ppid=1431 pid=1475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:38.681000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:25:38.682305 augenrules[1475]: No rules Dec 16 12:25:38.686981 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:25:38.689178 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:25:38.689486 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:25:38.740739 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 12:25:38.742418 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:25:38.773393 systemd-networkd[1494]: lo: Link UP Dec 16 12:25:38.774020 systemd-networkd[1494]: lo: Gained carrier Dec 16 12:25:38.775098 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:25:38.775763 systemd-networkd[1494]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:25:38.775772 systemd-networkd[1494]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:25:38.777510 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:25:38.777815 systemd-networkd[1494]: eth0: Link UP Dec 16 12:25:38.778697 systemd-networkd[1494]: eth0: Gained carrier Dec 16 12:25:38.778722 systemd-networkd[1494]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:25:38.779010 systemd[1]: Reached target network.target - Network. Dec 16 12:25:38.782503 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:25:38.784832 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:25:38.795363 systemd-networkd[1494]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:25:38.797454 systemd-timesyncd[1466]: Network configuration changed, trying to establish connection. 
Dec 16 12:25:38.798703 systemd-timesyncd[1466]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 12:25:38.798764 systemd-timesyncd[1466]: Initial clock synchronization to Tue 2025-12-16 12:25:38.947835 UTC. Dec 16 12:25:38.820798 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:25:38.856514 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:25:38.859450 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:25:38.885549 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:25:38.936797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:25:38.945686 ldconfig[1433]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:25:38.985205 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:25:38.987826 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:25:39.003747 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:25:39.011124 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:25:39.012704 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:25:39.013863 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:25:39.015033 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:25:39.016373 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:25:39.017456 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:25:39.018613 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 12:25:39.019987 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 12:25:39.021017 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:25:39.022136 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:25:39.022177 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:25:39.023031 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:25:39.025049 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:25:39.027569 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:25:39.030775 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:25:39.032152 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:25:39.033340 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:25:39.038355 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:25:39.039684 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:25:39.041449 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:25:39.042491 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:25:39.043303 systemd[1]: Reached target basic.target - Basic System. 
Dec 16 12:25:39.044141 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:25:39.044177 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:25:39.045483 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:25:39.047668 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:25:39.049867 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:25:39.052163 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:25:39.054450 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:25:39.055510 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:25:39.056898 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:25:39.061481 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:25:39.062943 jq[1543]: false Dec 16 12:25:39.063631 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:25:39.066369 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:25:39.070050 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:25:39.071187 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:25:39.071756 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:25:39.074520 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:25:39.074704 extend-filesystems[1544]: Found /dev/vda6 Dec 16 12:25:39.077212 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:25:39.080270 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:25:39.080689 extend-filesystems[1544]: Found /dev/vda9 Dec 16 12:25:39.086416 extend-filesystems[1544]: Checking size of /dev/vda9 Dec 16 12:25:39.081994 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:25:39.082257 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:25:39.084772 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:25:39.085033 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:25:39.087576 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:25:39.087795 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:25:39.101858 jq[1559]: true Dec 16 12:25:39.112255 update_engine[1555]: I20251216 12:25:39.111886 1555 main.cc:92] Flatcar Update Engine starting Dec 16 12:25:39.119919 jq[1583]: true Dec 16 12:25:39.122931 tar[1567]: linux-arm64/LICENSE Dec 16 12:25:39.123183 tar[1567]: linux-arm64/helm Dec 16 12:25:39.135147 extend-filesystems[1544]: Resized partition /dev/vda9 Dec 16 12:25:39.137844 systemd-logind[1553]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 12:25:39.138690 systemd-logind[1553]: New seat seat0. 
Dec 16 12:25:39.140639 dbus-daemon[1541]: [system] SELinux support is enabled Dec 16 12:25:39.140802 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:25:39.142710 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:25:39.143664 extend-filesystems[1595]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:25:39.146908 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:25:39.146941 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:25:39.147851 dbus-daemon[1541]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 12:25:39.149606 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:25:39.149638 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:25:39.157448 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:25:39.159046 update_engine[1555]: I20251216 12:25:39.158759 1555 update_check_scheduler.cc:74] Next update check in 3m47s Dec 16 12:25:39.161408 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:25:39.181379 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 16 12:25:39.224977 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 16 12:25:39.230671 locksmithd[1607]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:25:39.248447 extend-filesystems[1595]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 12:25:39.248447 extend-filesystems[1595]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 12:25:39.248447 extend-filesystems[1595]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 16 12:25:39.253551 extend-filesystems[1544]: Resized filesystem in /dev/vda9 Dec 16 12:25:39.255255 bash[1605]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:25:39.251972 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:25:39.252499 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:25:39.256096 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:25:39.258843 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Dec 16 12:25:39.289907 containerd[1582]: time="2025-12-16T12:25:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:25:39.290731 containerd[1582]: time="2025-12-16T12:25:39.290688113Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 12:25:39.304378 containerd[1582]: time="2025-12-16T12:25:39.303372438Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="45.34µs" Dec 16 12:25:39.304378 containerd[1582]: time="2025-12-16T12:25:39.303432271Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:25:39.304378 containerd[1582]: time="2025-12-16T12:25:39.303498442Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:25:39.304378 containerd[1582]: time="2025-12-16T12:25:39.303516933Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:25:39.304378 containerd[1582]: time="2025-12-16T12:25:39.303678021Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:25:39.304378 containerd[1582]: time="2025-12-16T12:25:39.303700267Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:25:39.304378 containerd[1582]: time="2025-12-16T12:25:39.303755255Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:25:39.304378 containerd[1582]: time="2025-12-16T12:25:39.303767488Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:25:39.304378 containerd[1582]: time="2025-12-16T12:25:39.304065401Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:25:39.304378 containerd[1582]: time="2025-12-16T12:25:39.304081429Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:25:39.304378 containerd[1582]: time="2025-12-16T12:25:39.304097497Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:25:39.304378 containerd[1582]: time="2025-12-16T12:25:39.304109852Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:25:39.304707 containerd[1582]: time="2025-12-16T12:25:39.304262583Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:25:39.304707 containerd[1582]: time="2025-12-16T12:25:39.304276350Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:25:39.304830 containerd[1582]: time="2025-12-16T12:25:39.304801523Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 
16 12:25:39.305164 containerd[1582]: time="2025-12-16T12:25:39.305103029Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:25:39.305269 containerd[1582]: time="2025-12-16T12:25:39.305252611Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:25:39.305342 containerd[1582]: time="2025-12-16T12:25:39.305313130Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:25:39.305423 containerd[1582]: time="2025-12-16T12:25:39.305408693Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:25:39.305680 containerd[1582]: time="2025-12-16T12:25:39.305661590Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:25:39.305810 containerd[1582]: time="2025-12-16T12:25:39.305792923Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:25:39.317371 containerd[1582]: time="2025-12-16T12:25:39.317324517Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:25:39.317550 containerd[1582]: time="2025-12-16T12:25:39.317534981Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:25:39.317766 containerd[1582]: time="2025-12-16T12:25:39.317737856Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:25:39.317827 containerd[1582]: time="2025-12-16T12:25:39.317814322Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:25:39.317918 containerd[1582]: time="2025-12-16T12:25:39.317904920Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:25:39.317986 containerd[1582]: time="2025-12-16T12:25:39.317973110Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:25:39.318056 containerd[1582]: time="2025-12-16T12:25:39.318027654Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:25:39.318112 containerd[1582]: time="2025-12-16T12:25:39.318099598Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:25:39.318172 containerd[1582]: time="2025-12-16T12:25:39.318151639Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:25:39.318223 containerd[1582]: time="2025-12-16T12:25:39.318210867Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:25:39.318286 containerd[1582]: time="2025-12-16T12:25:39.318274212Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:25:39.318367 containerd[1582]: time="2025-12-16T12:25:39.318353787Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:25:39.318433 containerd[1582]: time="2025-12-16T12:25:39.318421897Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:25:39.318485 containerd[1582]: time="2025-12-16T12:25:39.318475270Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:25:39.318717 containerd[1582]: time="2025-12-16T12:25:39.318699865Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:25:39.318800 containerd[1582]: time="2025-12-16T12:25:39.318785738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:25:39.318866 containerd[1582]: time="2025-12-16T12:25:39.318853040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:25:39.318945 containerd[1582]: time="2025-12-16T12:25:39.318931324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:25:39.319027 containerd[1582]: time="2025-12-16T12:25:39.319014331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:25:39.319109 containerd[1582]: time="2025-12-16T12:25:39.319095682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:25:39.319176 containerd[1582]: time="2025-12-16T12:25:39.319163994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:25:39.319250 containerd[1582]: time="2025-12-16T12:25:39.319226935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:25:39.319302 containerd[1582]: time="2025-12-16T12:25:39.319290442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:25:39.319395 containerd[1582]: time="2025-12-16T12:25:39.319380514Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:25:39.319456 containerd[1582]: time="2025-12-16T12:25:39.319436754Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:25:39.319550 containerd[1582]: time="2025-12-16T12:25:39.319534860Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:25:39.319653 containerd[1582]: time="2025-12-16T12:25:39.319638861Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:25:39.320404 containerd[1582]: time="2025-12-16T12:25:39.319690418Z" level=info msg="Start snapshots syncer" Dec 16 12:25:39.320404 containerd[1582]: time="2025-12-16T12:25:39.319724816Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:25:39.320404 containerd[1582]: time="2025-12-16T12:25:39.319982517Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:25:39.320682 containerd[1582]: time="2025-12-16T12:25:39.320033064Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:25:39.320682 containerd[1582]: time="2025-12-16T12:25:39.320100608Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:25:39.320682 containerd[1582]: time="2025-12-16T12:25:39.320228550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:25:39.320682 containerd[1582]: time="2025-12-16T12:25:39.320251644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:25:39.320682 containerd[1582]: time="2025-12-16T12:25:39.320264886Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:25:39.320682 containerd[1582]: time="2025-12-16T12:25:39.320276190Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:25:39.320682 containerd[1582]: time="2025-12-16T12:25:39.320288302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:25:39.320682 containerd[1582]: time="2025-12-16T12:25:39.320305420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:25:39.320682 containerd[1582]: time="2025-12-16T12:25:39.320317250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:25:39.320682 containerd[1582]: time="2025-12-16T12:25:39.320350638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
12:25:39.320682 containerd[1582]: time="2025-12-16T12:25:39.320371673Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:25:39.320682 containerd[1582]: time="2025-12-16T12:25:39.320412772Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:25:39.320682 containerd[1582]: time="2025-12-16T12:25:39.320430092Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:25:39.320682 containerd[1582]: time="2025-12-16T12:25:39.320439701Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:25:39.320929 containerd[1582]: time="2025-12-16T12:25:39.320450602Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:25:39.320929 containerd[1582]: time="2025-12-16T12:25:39.320460292Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:25:39.320929 containerd[1582]: time="2025-12-16T12:25:39.320471434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:25:39.320929 containerd[1582]: time="2025-12-16T12:25:39.320482577Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:25:39.320929 containerd[1582]: time="2025-12-16T12:25:39.320495618Z" level=info msg="runtime interface created" Dec 16 12:25:39.320929 containerd[1582]: time="2025-12-16T12:25:39.320500705Z" level=info msg="created NRI interface" Dec 16 12:25:39.320929 containerd[1582]: time="2025-12-16T12:25:39.320509022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:25:39.320929 containerd[1582]: time="2025-12-16T12:25:39.320520367Z" level=info msg="Connect containerd service" Dec 16 12:25:39.320929 containerd[1582]: time="2025-12-16T12:25:39.320547578Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:25:39.321374 containerd[1582]: time="2025-12-16T12:25:39.321246153Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:25:39.403014 containerd[1582]: time="2025-12-16T12:25:39.402948089Z" level=info msg="Start subscribing containerd event" Dec 16 12:25:39.403014 containerd[1582]: time="2025-12-16T12:25:39.403031701Z" level=info msg="Start recovering state" Dec 16 12:25:39.403311 containerd[1582]: time="2025-12-16T12:25:39.403142243Z" level=info msg="Start event monitor" Dec 16 12:25:39.403311 containerd[1582]: time="2025-12-16T12:25:39.403163035Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:25:39.403311 containerd[1582]: time="2025-12-16T12:25:39.403172603Z" level=info msg="Start streaming server" Dec 16 12:25:39.403311 containerd[1582]: time="2025-12-16T12:25:39.403181485Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:25:39.403311 containerd[1582]: time="2025-12-16T12:25:39.403188752Z" level=info msg="runtime interface starting up..." 
Dec 16 12:25:39.403311 containerd[1582]: time="2025-12-16T12:25:39.403195696Z" level=info msg="starting plugins..." Dec 16 12:25:39.403311 containerd[1582]: time="2025-12-16T12:25:39.403211967Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:25:39.403767 containerd[1582]: time="2025-12-16T12:25:39.403724866Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:25:39.403824 containerd[1582]: time="2025-12-16T12:25:39.403794146Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:25:39.404314 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:25:39.406512 containerd[1582]: time="2025-12-16T12:25:39.406426065Z" level=info msg="containerd successfully booted in 0.117244s" Dec 16 12:25:39.489963 tar[1567]: linux-arm64/README.md Dec 16 12:25:39.518515 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:25:39.836737 sshd_keygen[1562]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:25:39.860208 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:25:39.863875 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:25:39.896451 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:25:39.896753 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:25:39.899653 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:25:39.918466 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:25:39.921773 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:25:39.924158 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:25:39.925536 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:25:40.483868 systemd-networkd[1494]: eth0: Gained IPv6LL Dec 16 12:25:40.486747 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:25:40.488560 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:25:40.498541 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 12:25:40.514459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:25:40.516739 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:25:40.542672 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 12:25:40.542978 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 12:25:40.545994 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:25:40.564560 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:25:41.133313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:25:41.135772 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:25:41.138387 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:25:41.142670 systemd[1]: Startup finished in 1.526s (kernel) + 5.051s (initrd) + 4.096s (userspace) = 10.675s. 
Dec 16 12:25:41.512682 kubelet[1678]: E1216 12:25:41.512611 1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:25:41.514545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:25:41.514680 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:25:41.515364 systemd[1]: kubelet.service: Consumed 714ms CPU time, 249.4M memory peak. Dec 16 12:25:43.779513 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:25:43.781015 systemd[1]: Started sshd@0-10.0.0.45:22-10.0.0.1:34480.service - OpenSSH per-connection server daemon (10.0.0.1:34480). Dec 16 12:25:43.877292 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 34480 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:43.880585 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:43.888992 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:25:43.889966 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:25:43.896500 systemd-logind[1553]: New session 1 of user core. Dec 16 12:25:43.924416 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:25:43.928643 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:25:43.946184 (systemd)[1697]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:25:43.949505 systemd-logind[1553]: New session c1 of user core. Dec 16 12:25:44.074547 systemd[1697]: Queued start job for default target default.target. Dec 16 12:25:44.098535 systemd[1697]: Created slice app.slice - User Application Slice. Dec 16 12:25:44.098575 systemd[1697]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 12:25:44.098590 systemd[1697]: Reached target paths.target - Paths. Dec 16 12:25:44.098657 systemd[1697]: Reached target timers.target - Timers. Dec 16 12:25:44.100356 systemd[1697]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:25:44.101321 systemd[1697]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 12:25:44.117960 systemd[1697]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:25:44.118060 systemd[1697]: Reached target sockets.target - Sockets. Dec 16 12:25:44.119530 systemd[1697]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 12:25:44.119642 systemd[1697]: Reached target basic.target - Basic System. Dec 16 12:25:44.119698 systemd[1697]: Reached target default.target - Main User Target. Dec 16 12:25:44.119725 systemd[1697]: Startup finished in 157ms. Dec 16 12:25:44.119970 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:25:44.130608 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:25:44.159712 systemd[1]: Started sshd@1-10.0.0.45:22-10.0.0.1:34484.service - OpenSSH per-connection server daemon (10.0.0.1:34484). 
Dec 16 12:25:44.230760 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 34484 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:44.232173 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:44.236724 systemd-logind[1553]: New session 2 of user core. Dec 16 12:25:44.247638 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:25:44.259967 sshd[1713]: Connection closed by 10.0.0.1 port 34484 Dec 16 12:25:44.260514 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:44.273247 systemd[1]: sshd@1-10.0.0.45:22-10.0.0.1:34484.service: Deactivated successfully. Dec 16 12:25:44.275487 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 12:25:44.277419 systemd-logind[1553]: Session 2 logged out. Waiting for processes to exit. Dec 16 12:25:44.280949 systemd[1]: Started sshd@2-10.0.0.45:22-10.0.0.1:34498.service - OpenSSH per-connection server daemon (10.0.0.1:34498). Dec 16 12:25:44.281827 systemd-logind[1553]: Removed session 2. Dec 16 12:25:44.362216 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 34498 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:44.363551 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:44.368415 systemd-logind[1553]: New session 3 of user core. Dec 16 12:25:44.385587 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:25:44.394070 sshd[1722]: Connection closed by 10.0.0.1 port 34498 Dec 16 12:25:44.394546 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:44.406826 systemd[1]: sshd@2-10.0.0.45:22-10.0.0.1:34498.service: Deactivated successfully. Dec 16 12:25:44.408633 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:25:44.409443 systemd-logind[1553]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:25:44.412055 systemd[1]: Started sshd@3-10.0.0.45:22-10.0.0.1:34502.service - OpenSSH per-connection server daemon (10.0.0.1:34502). Dec 16 12:25:44.413014 systemd-logind[1553]: Removed session 3. Dec 16 12:25:44.472130 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 34502 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:44.473573 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:44.478505 systemd-logind[1553]: New session 4 of user core. Dec 16 12:25:44.486697 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:25:44.505671 sshd[1731]: Connection closed by 10.0.0.1 port 34502 Dec 16 12:25:44.506536 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:44.522712 systemd[1]: sshd@3-10.0.0.45:22-10.0.0.1:34502.service: Deactivated successfully. Dec 16 12:25:44.524680 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:25:44.525447 systemd-logind[1553]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:25:44.528320 systemd[1]: Started sshd@4-10.0.0.45:22-10.0.0.1:34516.service - OpenSSH per-connection server daemon (10.0.0.1:34516). Dec 16 12:25:44.529033 systemd-logind[1553]: Removed session 4. 
Dec 16 12:25:44.586833 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 34516 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:44.588192 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:44.592743 systemd-logind[1553]: New session 5 of user core. Dec 16 12:25:44.606552 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:25:44.629060 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:25:44.629388 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:25:44.649365 sudo[1741]: pam_unix(sudo:session): session closed for user root Dec 16 12:25:44.651225 sshd[1740]: Connection closed by 10.0.0.1 port 34516 Dec 16 12:25:44.651813 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:44.667930 systemd[1]: sshd@4-10.0.0.45:22-10.0.0.1:34516.service: Deactivated successfully. Dec 16 12:25:44.669952 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:25:44.672051 systemd-logind[1553]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:25:44.676118 systemd[1]: Started sshd@5-10.0.0.45:22-10.0.0.1:34530.service - OpenSSH per-connection server daemon (10.0.0.1:34530). Dec 16 12:25:44.677248 systemd-logind[1553]: Removed session 5. Dec 16 12:25:44.743514 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 34530 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:44.746125 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:44.751436 systemd-logind[1553]: New session 6 of user core. Dec 16 12:25:44.760595 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:25:44.777787 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:25:44.778521 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:25:44.785164 sudo[1752]: pam_unix(sudo:session): session closed for user root Dec 16 12:25:44.794720 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:25:44.795555 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:25:44.810069 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Dec 16 12:25:44.869000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:25:44.870798 augenrules[1774]: No rules Dec 16 12:25:44.873291 kernel: kauditd_printk_skb: 177 callbacks suppressed Dec 16 12:25:44.873386 kernel: audit: type=1305 audit(1765887944.869:220): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:25:44.873420 kernel: audit: type=1300 audit(1765887944.869:220): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe48254d0 a2=420 a3=0 items=0 ppid=1755 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:44.869000 audit[1774]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe48254d0 a2=420 a3=0 items=0 ppid=1755 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:44.874044 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:25:44.874435 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:25:44.869000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:25:44.878479 kernel: audit: type=1327 audit(1765887944.869:220): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:25:44.878512 kernel: audit: type=1130 audit(1765887944.874:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:44.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:44.878565 sudo[1751]: pam_unix(sudo:session): session closed for user root Dec 16 12:25:44.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:44.882284 sshd[1750]: Connection closed by 10.0.0.1 port 34530 Dec 16 12:25:44.882818 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Dec 16 12:25:44.886027 kernel: audit: type=1131 audit(1765887944.874:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:44.877000 audit[1751]: USER_END pid=1751 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:25:44.887262 systemd[1]: sshd@5-10.0.0.45:22-10.0.0.1:34530.service: Deactivated successfully. Dec 16 12:25:44.889211 systemd[1]: session-6.scope: Deactivated successfully. 
Dec 16 12:25:44.889746 kernel: audit: type=1106 audit(1765887944.877:223): pid=1751 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:25:44.877000 audit[1751]: CRED_DISP pid=1751 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:25:44.893270 kernel: audit: type=1104 audit(1765887944.877:224): pid=1751 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:25:44.893346 kernel: audit: type=1106 audit(1765887944.882:225): pid=1747 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:44.882000 audit[1747]: USER_END pid=1747 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:44.882000 audit[1747]: CRED_DISP pid=1747 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:44.903266 kernel: audit: type=1104 audit(1765887944.882:226): pid=1747 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:44.903337 kernel: audit: type=1131 audit(1765887944.886:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.45:22-10.0.0.1:34530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:44.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.45:22-10.0.0.1:34530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:44.917721 systemd-logind[1553]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:25:44.919774 systemd-logind[1553]: Removed session 6. Dec 16 12:25:44.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.45:22-10.0.0.1:34534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:44.922510 systemd[1]: Started sshd@6-10.0.0.45:22-10.0.0.1:34534.service - OpenSSH per-connection server daemon (10.0.0.1:34534). 
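The audit records interleaved above carry their own timestamps in the form audit(<epoch seconds>:<serial>), e.g. audit(1765887944.869:220), while the journal prefixes each entry with wall-clock time. Purely as an illustration (the record value is copied from the log above; the helper function itself is hypothetical, not part of the system), the epoch portion converts back to the same Dec 16 12:25:44 UTC instant shown in the surrounding journal lines:

    from datetime import datetime, timezone

    def audit_stamp_to_utc(stamp: str) -> datetime:
        """Convert an 'audit(<epoch>.<ms>:<serial>)' stamp into a UTC datetime."""
        inner = stamp[stamp.index("(") + 1 : stamp.index(")")]  # "1765887944.869:220"
        epoch = float(inner.split(":")[0])                      # seconds since 1970-01-01 UTC
        return datetime.fromtimestamp(epoch, tz=timezone.utc)

    print(audit_stamp_to_utc("audit(1765887944.869:220)"))
    # 2025-12-16 12:25:44.869000+00:00 -- matches the journal's "Dec 16 12:25:44" entries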
Dec 16 12:25:44.981000 audit[1783]: USER_ACCT pid=1783 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:44.983180 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 34534 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:25:44.982000 audit[1783]: CRED_ACQ pid=1783 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:44.982000 audit[1783]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd21c4e20 a2=3 a3=0 items=0 ppid=1 pid=1783 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:44.982000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:25:44.984470 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:25:44.991046 systemd-logind[1553]: New session 7 of user core. Dec 16 12:25:44.998612 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 12:25:44.999000 audit[1783]: USER_START pid=1783 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:45.000000 audit[1786]: CRED_ACQ pid=1786 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:25:45.009000 audit[1787]: USER_ACCT pid=1787 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:25:45.011464 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:25:45.010000 audit[1787]: CRED_REFR pid=1787 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:25:45.011789 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:25:45.014000 audit[1787]: USER_START pid=1787 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:25:45.339162 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 12:25:45.349053 (dockerd)[1808]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:25:45.583579 dockerd[1808]: time="2025-12-16T12:25:45.583518895Z" level=info msg="Starting up" Dec 16 12:25:45.584862 dockerd[1808]: time="2025-12-16T12:25:45.584796011Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:25:45.601101 dockerd[1808]: time="2025-12-16T12:25:45.600987127Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:25:45.618085 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport627054939-merged.mount: Deactivated successfully. Dec 16 12:25:45.754856 dockerd[1808]: time="2025-12-16T12:25:45.754789507Z" level=info msg="Loading containers: start." Dec 16 12:25:45.765368 kernel: Initializing XFRM netlink socket Dec 16 12:25:45.821000 audit[1863]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1863 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.821000 audit[1863]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffea41dc30 a2=0 a3=0 items=0 ppid=1808 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:25:45.824000 audit[1865]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1865 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.824000 audit[1865]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd69d34e0 a2=0 a3=0 items=0 ppid=1808 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.824000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:25:45.826000 audit[1867]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.826000 audit[1867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff191b230 a2=0 a3=0 items=0 ppid=1808 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.826000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:25:45.829000 audit[1869]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1869 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.829000 audit[1869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde2591e0 a2=0 a3=0 items=0 ppid=1808 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.829000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:25:45.831000 audit[1871]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.831000 audit[1871]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe298ffe0 a2=0 a3=0 items=0 ppid=1808 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.831000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:25:45.834000 audit[1873]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.834000 audit[1873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffeab25e40 a2=0 a3=0 items=0 ppid=1808 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.834000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:25:45.836000 audit[1875]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1875 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.836000 audit[1875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd31175e0 a2=0 a3=0 items=0 ppid=1808 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.836000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:25:45.839000 audit[1877]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.839000 audit[1877]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffebdce750 a2=0 a3=0 items=0 ppid=1808 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.839000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:25:45.871000 audit[1880]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.871000 audit[1880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffefe3fb20 a2=0 a3=0 items=0 ppid=1808 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.871000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 12:25:45.873000 audit[1882]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.873000 audit[1882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe97e2800 a2=0 a3=0 items=0 ppid=1808 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.873000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:25:45.875000 audit[1884]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.875000 audit[1884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff55d7c50 a2=0 a3=0 items=0 ppid=1808 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.875000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:25:45.877000 audit[1886]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.877000 audit[1886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffea0de660 a2=0 a3=0 items=0 ppid=1808 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.877000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:25:45.879000 audit[1888]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.879000 audit[1888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffd5a940d0 a2=0 a3=0 items=0 ppid=1808 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.879000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:25:45.918000 audit[1918]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1918 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.918000 audit[1918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe0d38d40 a2=0 a3=0 items=0 ppid=1808 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.918000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:25:45.920000 audit[1920]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.920000 audit[1920]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd5223480 a2=0 a3=0 items=0 ppid=1808 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.920000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:25:45.922000 audit[1922]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.922000 audit[1922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1d7f420 a2=0 a3=0 items=0 ppid=1808 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.922000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:25:45.925000 audit[1924]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.925000 audit[1924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd3359a50 a2=0 a3=0 items=0 ppid=1808 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.925000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:25:45.927000 audit[1926]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.927000 audit[1926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe8ee5800 a2=0 a3=0 items=0 ppid=1808 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.927000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:25:45.929000 audit[1928]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.929000 audit[1928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe5f6d610 a2=0 a3=0 items=0 ppid=1808 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.929000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:25:45.931000 audit[1930]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1930 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.931000 audit[1930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff96feb10 a2=0 a3=0 items=0 ppid=1808 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.931000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:25:45.935000 audit[1932]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.935000 audit[1932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffffe826c60 a2=0 a3=0 items=0 ppid=1808 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.935000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:25:45.938000 audit[1934]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.938000 audit[1934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffff3bbffe0 a2=0 a3=0 items=0 ppid=1808 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.938000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 12:25:45.940000 audit[1936]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.940000 audit[1936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcc3060c0 a2=0 a3=0 items=0 ppid=1808 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.940000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:25:45.942000 audit[1938]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.942000 audit[1938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe3bd4480 a2=0 a3=0 items=0 ppid=1808 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.942000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:25:45.945000 audit[1940]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1940 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.945000 audit[1940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff884a710 a2=0 a3=0 items=0 ppid=1808 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.945000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:25:45.947000 audit[1942]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.947000 audit[1942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffd899720 a2=0 a3=0 items=0 ppid=1808 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.947000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:25:45.954000 audit[1947]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.954000 audit[1947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc124af30 a2=0 a3=0 items=0 ppid=1808 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:25:45.957000 audit[1949]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.957000 audit[1949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffeec799e0 a2=0 a3=0 items=0 ppid=1808 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.957000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:25:45.959000 audit[1951]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.959000 audit[1951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffa2f8f30 a2=0 a3=0 items=0 ppid=1808 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.959000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:25:45.961000 audit[1953]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.961000 audit[1953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe5a59ce0 a2=0 a3=0 items=0 ppid=1808 pid=1953 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.961000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:25:45.964000 audit[1955]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.964000 audit[1955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc4c00740 a2=0 a3=0 items=0 ppid=1808 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.964000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:25:45.966000 audit[1957]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:25:45.966000 audit[1957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffb72abb0 a2=0 a3=0 items=0 ppid=1808 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.966000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:25:45.983000 audit[1961]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.983000 audit[1961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffcfbadda0 a2=0 a3=0 items=0 ppid=1808 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.983000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 12:25:45.987000 audit[1963]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.987000 audit[1963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd1cc7010 a2=0 a3=0 items=0 ppid=1808 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.987000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 12:25:45.996000 audit[1971]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=1971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:45.996000 audit[1971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffcebd3850 a2=0 a3=0 items=0 ppid=1808 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:45.996000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 12:25:46.006000 audit[1977]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=1977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:46.006000 audit[1977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffde4064e0 a2=0 a3=0 items=0 ppid=1808 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:46.006000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 12:25:46.010000 audit[1979]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=1979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:46.010000 audit[1979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffff129c3f0 a2=0 a3=0 items=0 ppid=1808 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:46.010000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:25:46.012000 audit[1981]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=1981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:46.012000 audit[1981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc295e3e0 a2=0 a3=0 items=0 ppid=1808 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:46.012000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:25:46.015000 audit[1983]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:46.015000 audit[1983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffeded4520 a2=0 a3=0 items=0 ppid=1808 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:46.015000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:25:46.017000 audit[1985]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=1985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:25:46.017000 audit[1985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffd0dbdc0 a2=0 a3=0 items=0 ppid=1808 pid=1985 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:25:46.017000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:25:46.019462 systemd-networkd[1494]: docker0: Link UP Dec 16 12:25:46.026016 dockerd[1808]: time="2025-12-16T12:25:46.025947796Z" level=info msg="Loading containers: done." Dec 16 12:25:46.051381 dockerd[1808]: time="2025-12-16T12:25:46.051264755Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:25:46.051566 dockerd[1808]: time="2025-12-16T12:25:46.051425464Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:25:46.051644 dockerd[1808]: time="2025-12-16T12:25:46.051606009Z" level=info msg="Initializing buildkit" Dec 16 12:25:46.083391 dockerd[1808]: time="2025-12-16T12:25:46.083262700Z" level=info msg="Completed buildkit initialization" Dec 16 12:25:46.090115 dockerd[1808]: time="2025-12-16T12:25:46.090063513Z" level=info msg="Daemon has completed initialization" Dec 16 12:25:46.090367 dockerd[1808]: time="2025-12-16T12:25:46.090226958Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:25:46.090418 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:25:46.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:46.662033 containerd[1582]: time="2025-12-16T12:25:46.661984472Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 12:25:47.366989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3357877215.mount: Deactivated successfully. 
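Each PROCTITLE field in the NETFILTER_CFG audit records above is the hex-encoded, NUL-separated argv of the process that modified the ruleset, so decoding it recovers the plain iptables/ip6tables invocations dockerd issued while creating its DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION and DOCKER-USER chains. A minimal sketch (the helper name is made up for illustration; the hex string is copied verbatim from the first NETFILTER_CFG record in this section):

    def decode_proctitle(hexstr: str) -> str:
        """Turn an audit PROCTITLE hex blob back into a readable command line."""
        raw = bytes.fromhex(hexstr)  # argv words are separated by NUL bytes
        return " ".join(part.decode() for part in raw.split(b"\x00") if part)

    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
    ))
    # /usr/bin/iptables --wait -t nat -N DOCKER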
Dec 16 12:25:48.322560 containerd[1582]: time="2025-12-16T12:25:48.322500969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:48.324180 containerd[1582]: time="2025-12-16T12:25:48.324096933Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=22974850" Dec 16 12:25:48.325513 containerd[1582]: time="2025-12-16T12:25:48.325477458Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:48.329403 containerd[1582]: time="2025-12-16T12:25:48.329310030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:48.330380 containerd[1582]: time="2025-12-16T12:25:48.330335995Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.668287364s" Dec 16 12:25:48.330380 containerd[1582]: time="2025-12-16T12:25:48.330381270Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Dec 16 12:25:48.331107 containerd[1582]: time="2025-12-16T12:25:48.331004987Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 12:25:49.611622 containerd[1582]: time="2025-12-16T12:25:49.611560950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:49.612719 containerd[1582]: time="2025-12-16T12:25:49.612667617Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19130075" Dec 16 12:25:49.614088 containerd[1582]: time="2025-12-16T12:25:49.614021688Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:49.617067 containerd[1582]: time="2025-12-16T12:25:49.616914408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:49.618771 containerd[1582]: time="2025-12-16T12:25:49.618430146Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.287359502s" Dec 16 12:25:49.618771 containerd[1582]: time="2025-12-16T12:25:49.618477617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Dec 16 12:25:49.618970 
containerd[1582]: time="2025-12-16T12:25:49.618937937Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 12:25:50.743872 containerd[1582]: time="2025-12-16T12:25:50.743695018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:50.747759 containerd[1582]: time="2025-12-16T12:25:50.747681936Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14183580" Dec 16 12:25:50.755682 containerd[1582]: time="2025-12-16T12:25:50.755581274Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:50.762105 containerd[1582]: time="2025-12-16T12:25:50.762014037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:50.763156 containerd[1582]: time="2025-12-16T12:25:50.763110080Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.144134765s" Dec 16 12:25:50.763156 containerd[1582]: time="2025-12-16T12:25:50.763153157Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Dec 16 12:25:50.763687 containerd[1582]: time="2025-12-16T12:25:50.763634514Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 12:25:51.727029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:25:51.729493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:25:51.954214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:25:51.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:51.957970 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 12:25:51.958091 kernel: audit: type=1130 audit(1765887951.953:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:25:51.959517 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:25:52.012417 kubelet[2108]: E1216 12:25:52.010844 2108 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:25:52.014203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:25:52.014379 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 16 12:25:52.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:25:52.015148 systemd[1]: kubelet.service: Consumed 191ms CPU time, 107.1M memory peak. Dec 16 12:25:52.020086 kernel: audit: type=1131 audit(1765887952.014:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:25:52.127672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2532202213.mount: Deactivated successfully. Dec 16 12:25:52.554614 containerd[1582]: time="2025-12-16T12:25:52.554457737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:52.557054 containerd[1582]: time="2025-12-16T12:25:52.556982793Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22801532" Dec 16 12:25:52.558128 containerd[1582]: time="2025-12-16T12:25:52.558058311Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:52.560207 containerd[1582]: time="2025-12-16T12:25:52.560131352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:52.560898 containerd[1582]: time="2025-12-16T12:25:52.560780156Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.797104621s" Dec 16 12:25:52.560949 containerd[1582]: time="2025-12-16T12:25:52.560908913Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Dec 16 12:25:52.561604 containerd[1582]: time="2025-12-16T12:25:52.561565429Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 12:25:53.549135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742954285.mount: Deactivated successfully. 
Dec 16 12:25:54.812791 containerd[1582]: time="2025-12-16T12:25:54.812726074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:54.816886 containerd[1582]: time="2025-12-16T12:25:54.816811334Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=0" Dec 16 12:25:54.820460 containerd[1582]: time="2025-12-16T12:25:54.820393297Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:54.827045 containerd[1582]: time="2025-12-16T12:25:54.826948836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:54.828281 containerd[1582]: time="2025-12-16T12:25:54.828222492Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.266607471s" Dec 16 12:25:54.828548 containerd[1582]: time="2025-12-16T12:25:54.828422438Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Dec 16 12:25:54.829187 containerd[1582]: time="2025-12-16T12:25:54.828994979Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 12:25:55.307774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3586857399.mount: Deactivated successfully. 
Dec 16 12:25:55.317496 containerd[1582]: time="2025-12-16T12:25:55.317387483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:55.318523 containerd[1582]: time="2025-12-16T12:25:55.318460515Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Dec 16 12:25:55.319936 containerd[1582]: time="2025-12-16T12:25:55.319903892Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:55.325531 containerd[1582]: time="2025-12-16T12:25:55.325467748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:55.326397 containerd[1582]: time="2025-12-16T12:25:55.326351674Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 497.324664ms" Dec 16 12:25:55.326397 containerd[1582]: time="2025-12-16T12:25:55.326398308Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Dec 16 12:25:55.327047 containerd[1582]: time="2025-12-16T12:25:55.327003431Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 12:25:55.865801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4130288188.mount: Deactivated successfully. Dec 16 12:25:58.019439 containerd[1582]: time="2025-12-16T12:25:58.019370177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:58.020845 containerd[1582]: time="2025-12-16T12:25:58.020568527Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=96314798" Dec 16 12:25:58.021765 containerd[1582]: time="2025-12-16T12:25:58.021719989Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:58.024639 containerd[1582]: time="2025-12-16T12:25:58.024596879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:25:58.026002 containerd[1582]: time="2025-12-16T12:25:58.025802128Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.698756844s" Dec 16 12:25:58.026002 containerd[1582]: time="2025-12-16T12:25:58.025844484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Dec 16 12:26:01.397305 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:26:01.397493 systemd[1]: kubelet.service: Consumed 191ms CPU time, 107.1M memory peak. Dec 16 12:26:01.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:01.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:01.399695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:26:01.402202 kernel: audit: type=1130 audit(1765887961.396:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:01.402334 kernel: audit: type=1131 audit(1765887961.396:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:01.427603 systemd[1]: Reload requested from client PID 2261 ('systemctl') (unit session-7.scope)... Dec 16 12:26:01.427622 systemd[1]: Reloading... Dec 16 12:26:01.517356 zram_generator::config[2309]: No configuration found. Dec 16 12:26:01.723372 systemd[1]: Reloading finished in 295 ms. Dec 16 12:26:01.753000 audit: BPF prog-id=61 op=LOAD Dec 16 12:26:01.753000 audit: BPF prog-id=56 op=UNLOAD Dec 16 12:26:01.754960 kernel: audit: type=1334 audit(1765887961.753:282): prog-id=61 op=LOAD Dec 16 12:26:01.755016 kernel: audit: type=1334 audit(1765887961.753:283): prog-id=56 op=UNLOAD Dec 16 12:26:01.755036 kernel: audit: type=1334 audit(1765887961.753:284): prog-id=62 op=LOAD Dec 16 12:26:01.753000 audit: BPF prog-id=62 op=LOAD Dec 16 12:26:01.755615 kernel: audit: type=1334 audit(1765887961.754:285): prog-id=63 op=LOAD Dec 16 12:26:01.754000 audit: BPF prog-id=63 op=LOAD Dec 16 12:26:01.754000 audit: BPF prog-id=54 op=UNLOAD Dec 16 12:26:01.754000 audit: BPF prog-id=55 op=UNLOAD Dec 16 12:26:01.755000 audit: BPF prog-id=64 op=LOAD Dec 16 12:26:01.757053 kernel: audit: type=1334 audit(1765887961.754:286): prog-id=54 op=UNLOAD Dec 16 12:26:01.757103 kernel: audit: type=1334 audit(1765887961.754:287): prog-id=55 op=UNLOAD Dec 16 12:26:01.757129 kernel: audit: type=1334 audit(1765887961.755:288): prog-id=64 op=LOAD Dec 16 12:26:01.755000 audit: BPF prog-id=58 op=UNLOAD Dec 16 12:26:01.757000 audit: BPF prog-id=65 op=LOAD Dec 16 12:26:01.758000 audit: BPF prog-id=66 op=LOAD Dec 16 12:26:01.758000 audit: BPF prog-id=59 op=UNLOAD Dec 16 12:26:01.758000 audit: BPF prog-id=60 op=UNLOAD Dec 16 12:26:01.759354 kernel: audit: type=1334 audit(1765887961.755:289): prog-id=58 op=UNLOAD Dec 16 12:26:01.765000 audit: BPF prog-id=67 op=LOAD Dec 16 12:26:01.765000 audit: BPF prog-id=45 op=UNLOAD Dec 16 12:26:01.765000 audit: BPF prog-id=68 op=LOAD Dec 16 12:26:01.765000 audit: BPF prog-id=69 op=LOAD Dec 16 12:26:01.765000 audit: BPF prog-id=46 op=UNLOAD Dec 16 12:26:01.765000 audit: BPF prog-id=47 op=UNLOAD Dec 16 12:26:01.766000 audit: BPF prog-id=70 op=LOAD Dec 16 12:26:01.766000 audit: BPF prog-id=48 op=UNLOAD Dec 16 12:26:01.766000 audit: BPF prog-id=71 op=LOAD Dec 16 12:26:01.766000 audit: BPF prog-id=72 op=LOAD Dec 16 12:26:01.766000 audit: BPF prog-id=49 op=UNLOAD Dec 16 12:26:01.766000 audit: BPF prog-id=50 op=UNLOAD Dec 16 
12:26:01.766000 audit: BPF prog-id=73 op=LOAD Dec 16 12:26:01.766000 audit: BPF prog-id=42 op=UNLOAD Dec 16 12:26:01.766000 audit: BPF prog-id=74 op=LOAD Dec 16 12:26:01.766000 audit: BPF prog-id=75 op=LOAD Dec 16 12:26:01.766000 audit: BPF prog-id=43 op=UNLOAD Dec 16 12:26:01.766000 audit: BPF prog-id=44 op=UNLOAD Dec 16 12:26:01.767000 audit: BPF prog-id=76 op=LOAD Dec 16 12:26:01.767000 audit: BPF prog-id=41 op=UNLOAD Dec 16 12:26:01.767000 audit: BPF prog-id=77 op=LOAD Dec 16 12:26:01.767000 audit: BPF prog-id=51 op=UNLOAD Dec 16 12:26:01.768000 audit: BPF prog-id=78 op=LOAD Dec 16 12:26:01.768000 audit: BPF prog-id=79 op=LOAD Dec 16 12:26:01.768000 audit: BPF prog-id=52 op=UNLOAD Dec 16 12:26:01.768000 audit: BPF prog-id=53 op=UNLOAD Dec 16 12:26:01.768000 audit: BPF prog-id=80 op=LOAD Dec 16 12:26:01.768000 audit: BPF prog-id=57 op=UNLOAD Dec 16 12:26:01.797035 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:26:01.797131 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:26:01.797514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:26:01.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:26:01.797583 systemd[1]: kubelet.service: Consumed 106ms CPU time, 95M memory peak. Dec 16 12:26:01.799307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:26:01.933755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:26:01.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:01.936487 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:26:01.975674 kubelet[2351]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:26:01.975674 kubelet[2351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:26:01.975674 kubelet[2351]: I1216 12:26:01.975628 2351 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:26:03.132991 kubelet[2351]: I1216 12:26:03.132945 2351 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 12:26:03.132991 kubelet[2351]: I1216 12:26:03.132984 2351 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:26:03.133380 kubelet[2351]: I1216 12:26:03.133025 2351 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 12:26:03.133380 kubelet[2351]: I1216 12:26:03.133034 2351 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:26:03.133499 kubelet[2351]: I1216 12:26:03.133479 2351 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:26:03.141179 kubelet[2351]: E1216 12:26:03.141095 2351 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:26:03.142139 kubelet[2351]: I1216 12:26:03.141591 2351 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:26:03.151330 kubelet[2351]: I1216 12:26:03.151233 2351 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:26:03.154564 kubelet[2351]: I1216 12:26:03.153782 2351 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 16 12:26:03.154564 kubelet[2351]: I1216 12:26:03.154018 2351 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:26:03.154564 kubelet[2351]: I1216 12:26:03.154049 2351 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:26:03.154564 kubelet[2351]: I1216 12:26:03.154213 2351 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:26:03.154786 kubelet[2351]: I1216 12:26:03.154225 2351 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 12:26:03.154786 kubelet[2351]: I1216 12:26:03.154367 2351 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 12:26:03.157553 kubelet[2351]: I1216 12:26:03.157522 2351 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:26:03.160027 kubelet[2351]: I1216 12:26:03.159997 2351 kubelet.go:475] "Attempting to sync node with API server" Dec 16 
12:26:03.160142 kubelet[2351]: I1216 12:26:03.160131 2351 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:26:03.160697 kubelet[2351]: I1216 12:26:03.160676 2351 kubelet.go:387] "Adding apiserver pod source" Dec 16 12:26:03.160777 kubelet[2351]: I1216 12:26:03.160768 2351 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:26:03.161435 kubelet[2351]: E1216 12:26:03.160667 2351 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:26:03.161993 kubelet[2351]: E1216 12:26:03.161966 2351 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:26:03.162467 kubelet[2351]: I1216 12:26:03.162443 2351 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:26:03.163121 kubelet[2351]: I1216 12:26:03.163100 2351 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:26:03.163170 kubelet[2351]: I1216 12:26:03.163136 2351 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 12:26:03.163202 kubelet[2351]: W1216 12:26:03.163188 2351 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 12:26:03.169419 kubelet[2351]: I1216 12:26:03.168700 2351 server.go:1262] "Started kubelet" Dec 16 12:26:03.169554 kubelet[2351]: I1216 12:26:03.169487 2351 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:26:03.169993 kubelet[2351]: I1216 12:26:03.169947 2351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:26:03.171186 kubelet[2351]: I1216 12:26:03.170716 2351 server.go:310] "Adding debug handlers to kubelet server" Dec 16 12:26:03.171186 kubelet[2351]: I1216 12:26:03.170933 2351 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:26:03.171186 kubelet[2351]: I1216 12:26:03.171012 2351 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 12:26:03.171657 kubelet[2351]: I1216 12:26:03.171619 2351 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:26:03.173345 kubelet[2351]: I1216 12:26:03.172778 2351 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 12:26:03.173345 kubelet[2351]: E1216 12:26:03.173231 2351 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:26:03.174459 kubelet[2351]: I1216 12:26:03.174425 2351 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 12:26:03.174725 kubelet[2351]: I1216 12:26:03.174710 2351 reconciler.go:29] "Reconciler: start to sync state" Dec 16 12:26:03.177336 kubelet[2351]: I1216 12:26:03.177282 2351 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:26:03.177856 kubelet[2351]: E1216 12:26:03.176496 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.45:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.45:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b1bf15ce4018 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:26:03.168653336 +0000 UTC m=+1.228634944,LastTimestamp:2025-12-16 12:26:03.168653336 +0000 UTC m=+1.228634944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:26:03.178201 kubelet[2351]: E1216 12:26:03.178167 2351 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:26:03.178416 kubelet[2351]: E1216 12:26:03.178379 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="200ms" Dec 16 12:26:03.178872 kubelet[2351]: I1216 12:26:03.178835 2351 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:26:03.179033 kubelet[2351]: I1216 12:26:03.178977 2351 factory.go:221] Registration of the 
crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:26:03.180748 kubelet[2351]: I1216 12:26:03.180721 2351 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:26:03.184962 kubelet[2351]: E1216 12:26:03.184571 2351 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:26:03.184000 audit[2370]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:03.184000 audit[2370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe8cd1320 a2=0 a3=0 items=0 ppid=2351 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.184000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:26:03.185000 audit[2371]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:03.185000 audit[2371]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe7c76020 a2=0 a3=0 items=0 ppid=2351 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:26:03.188000 audit[2374]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:03.188000 audit[2374]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffca0f0bb0 a2=0 a3=0 items=0 ppid=2351 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:26:03.191000 audit[2376]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:03.191000 audit[2376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe909a680 a2=0 a3=0 items=0 ppid=2351 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.191000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:26:03.198310 kubelet[2351]: I1216 12:26:03.198074 2351 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:26:03.198310 kubelet[2351]: I1216 12:26:03.198090 2351 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:26:03.198310 kubelet[2351]: I1216 12:26:03.198112 2351 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:26:03.201387 kubelet[2351]: I1216 12:26:03.201124 2351 
policy_none.go:49] "None policy: Start" Dec 16 12:26:03.201387 kubelet[2351]: I1216 12:26:03.201155 2351 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:26:03.201387 kubelet[2351]: I1216 12:26:03.201175 2351 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 12:26:03.203450 kubelet[2351]: I1216 12:26:03.203425 2351 policy_none.go:47] "Start" Dec 16 12:26:03.202000 audit[2381]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2381 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:03.202000 audit[2381]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffffb196f90 a2=0 a3=0 items=0 ppid=2351 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.202000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 16 12:26:03.205307 kubelet[2351]: I1216 12:26:03.205185 2351 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 12:26:03.205000 audit[2383]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:03.205000 audit[2383]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd1f939c0 a2=0 a3=0 items=0 ppid=2351 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.205000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:26:03.207559 kubelet[2351]: I1216 12:26:03.207535 2351 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:26:03.207559 kubelet[2351]: I1216 12:26:03.207559 2351 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:26:03.206000 audit[2384]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:03.206000 audit[2384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe49e0f0 a2=0 a3=0 items=0 ppid=2351 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.206000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:26:03.208000 audit[2385]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:03.208000 audit[2385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf06f5d0 a2=0 a3=0 items=0 ppid=2351 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:26:03.209000 audit[2387]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:03.209000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc758fd10 a2=0 a3=0 items=0 ppid=2351 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.209000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:26:03.210000 audit[2386]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:03.210000 audit[2386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd637a760 a2=0 a3=0 items=0 ppid=2351 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.210000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:26:03.212000 audit[2388]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:03.212000 audit[2388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffedb8ed80 a2=0 a3=0 items=0 ppid=2351 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.212000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:26:03.210954 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 16 12:26:03.214000 audit[2390]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:03.214000 audit[2390]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd91311e0 a2=0 a3=0 items=0 ppid=2351 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:03.214000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:26:03.214624 kubelet[2351]: I1216 12:26:03.207594 2351 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:26:03.214624 kubelet[2351]: E1216 12:26:03.207646 2351 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:26:03.214624 kubelet[2351]: E1216 12:26:03.208338 2351 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:26:03.225120 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:26:03.230900 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:26:03.246563 kubelet[2351]: E1216 12:26:03.246529 2351 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:26:03.247159 kubelet[2351]: I1216 12:26:03.247143 2351 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:26:03.247314 kubelet[2351]: I1216 12:26:03.247215 2351 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:26:03.247478 kubelet[2351]: I1216 12:26:03.247457 2351 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:26:03.248830 kubelet[2351]: E1216 12:26:03.248798 2351 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:26:03.248927 kubelet[2351]: E1216 12:26:03.248851 2351 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 12:26:03.349759 kubelet[2351]: I1216 12:26:03.349724 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:26:03.350237 kubelet[2351]: E1216 12:26:03.350201 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Dec 16 12:26:03.356862 systemd[1]: Created slice kubepods-burstable-pod0ec0cfc85455d03f3ca06b5d578a8d17.slice - libcontainer container kubepods-burstable-pod0ec0cfc85455d03f3ca06b5d578a8d17.slice. 
Dec 16 12:26:03.379532 kubelet[2351]: E1216 12:26:03.379493 2351 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:26:03.379928 kubelet[2351]: E1216 12:26:03.379888 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="400ms" Dec 16 12:26:03.382772 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Dec 16 12:26:03.395172 kubelet[2351]: E1216 12:26:03.394178 2351 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:26:03.400031 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Dec 16 12:26:03.404675 kubelet[2351]: E1216 12:26:03.404645 2351 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:26:03.475956 kubelet[2351]: I1216 12:26:03.475859 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:03.475956 kubelet[2351]: I1216 12:26:03.475917 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:03.476313 kubelet[2351]: I1216 12:26:03.476164 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:26:03.476313 kubelet[2351]: I1216 12:26:03.476195 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ec0cfc85455d03f3ca06b5d578a8d17-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ec0cfc85455d03f3ca06b5d578a8d17\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:26:03.476313 kubelet[2351]: I1216 12:26:03.476213 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:03.476313 kubelet[2351]: I1216 12:26:03.476244 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:03.476313 kubelet[2351]: I1216 12:26:03.476260 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:03.476585 kubelet[2351]: I1216 12:26:03.476276 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ec0cfc85455d03f3ca06b5d578a8d17-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ec0cfc85455d03f3ca06b5d578a8d17\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:26:03.476585 kubelet[2351]: I1216 12:26:03.476294 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ec0cfc85455d03f3ca06b5d578a8d17-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0ec0cfc85455d03f3ca06b5d578a8d17\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:26:03.553279 kubelet[2351]: I1216 12:26:03.552891 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:26:03.553279 kubelet[2351]: E1216 12:26:03.553241 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Dec 16 12:26:03.682694 kubelet[2351]: E1216 12:26:03.682552 2351 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:03.683744 containerd[1582]: time="2025-12-16T12:26:03.683702187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0ec0cfc85455d03f3ca06b5d578a8d17,Namespace:kube-system,Attempt:0,}" Dec 16 12:26:03.698756 kubelet[2351]: E1216 12:26:03.698696 2351 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:03.699417 containerd[1582]: time="2025-12-16T12:26:03.699341237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Dec 16 12:26:03.707839 kubelet[2351]: E1216 12:26:03.707779 2351 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:03.708580 containerd[1582]: time="2025-12-16T12:26:03.708511653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Dec 16 12:26:03.781116 kubelet[2351]: E1216 12:26:03.781066 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="800ms" Dec 
16 12:26:03.956310 kubelet[2351]: I1216 12:26:03.955839 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:26:03.956583 kubelet[2351]: E1216 12:26:03.956554 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Dec 16 12:26:04.052134 kubelet[2351]: E1216 12:26:04.052084 2351 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:26:04.232957 kubelet[2351]: E1216 12:26:04.231119 2351 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:26:04.232490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1172932147.mount: Deactivated successfully. Dec 16 12:26:04.241011 containerd[1582]: time="2025-12-16T12:26:04.240930823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:26:04.250407 kubelet[2351]: E1216 12:26:04.250220 2351 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:26:04.250945 containerd[1582]: time="2025-12-16T12:26:04.250863602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:26:04.252686 containerd[1582]: time="2025-12-16T12:26:04.252642100Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:26:04.255382 containerd[1582]: time="2025-12-16T12:26:04.255310367Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:26:04.256582 containerd[1582]: time="2025-12-16T12:26:04.256405438Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:26:04.259706 containerd[1582]: time="2025-12-16T12:26:04.259656787Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:26:04.263363 containerd[1582]: time="2025-12-16T12:26:04.262556204Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:26:04.264802 containerd[1582]: time="2025-12-16T12:26:04.264747627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:26:04.269288 containerd[1582]: time="2025-12-16T12:26:04.266682816Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 580.391669ms" Dec 16 12:26:04.269288 containerd[1582]: time="2025-12-16T12:26:04.268370545Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 556.709225ms" Dec 16 12:26:04.269288 containerd[1582]: time="2025-12-16T12:26:04.269003639Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 567.681125ms" Dec 16 12:26:04.314031 containerd[1582]: time="2025-12-16T12:26:04.313984248Z" level=info msg="connecting to shim 096e085c3af9ca56ead150674e7e983e7d824d5c23a374aa87340a57d78a119a" address="unix:///run/containerd/s/bd055fc228ef4cfa546a9cf8c12bab59c4d01cb251a18285a4b854c8dcdb5d33" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:04.320599 containerd[1582]: time="2025-12-16T12:26:04.320537039Z" level=info msg="connecting to shim c2b46bd2a83cf522084bdeada324a743ca5dc4a13e714ee77772896aa0341e0b" address="unix:///run/containerd/s/872002af7dfc7f312debb3c49f570e127ad0647c897fe63077217419f7acb5b3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:04.320800 containerd[1582]: time="2025-12-16T12:26:04.320778166Z" level=info msg="connecting to shim 4ee5b3236e66880473f086de6146a7cd8abf8776ee53abb34d5dbf51dd4220ac" address="unix:///run/containerd/s/b7a7a8dfe931189da04caaf4ec4e0da8eec622ea2756b4635ea8ddb0eea79a10" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:04.345800 systemd[1]: Started cri-containerd-096e085c3af9ca56ead150674e7e983e7d824d5c23a374aa87340a57d78a119a.scope - libcontainer container 096e085c3af9ca56ead150674e7e983e7d824d5c23a374aa87340a57d78a119a. Dec 16 12:26:04.350574 systemd[1]: Started cri-containerd-c2b46bd2a83cf522084bdeada324a743ca5dc4a13e714ee77772896aa0341e0b.scope - libcontainer container c2b46bd2a83cf522084bdeada324a743ca5dc4a13e714ee77772896aa0341e0b. Dec 16 12:26:04.354381 systemd[1]: Started cri-containerd-4ee5b3236e66880473f086de6146a7cd8abf8776ee53abb34d5dbf51dd4220ac.scope - libcontainer container 4ee5b3236e66880473f086de6146a7cd8abf8776ee53abb34d5dbf51dd4220ac. 
Dec 16 12:26:04.364000 audit: BPF prog-id=81 op=LOAD Dec 16 12:26:04.365000 audit: BPF prog-id=82 op=LOAD Dec 16 12:26:04.365000 audit[2458]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=2425 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465653562333233366536363838303437336630383664653631343661 Dec 16 12:26:04.365000 audit: BPF prog-id=82 op=UNLOAD Dec 16 12:26:04.365000 audit[2458]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2425 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465653562333233366536363838303437336630383664653631343661 Dec 16 12:26:04.365000 audit: BPF prog-id=83 op=LOAD Dec 16 12:26:04.365000 audit[2458]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=2425 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465653562333233366536363838303437336630383664653631343661 Dec 16 12:26:04.365000 audit: BPF prog-id=84 op=LOAD Dec 16 12:26:04.365000 audit[2458]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=2425 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465653562333233366536363838303437336630383664653631343661 Dec 16 12:26:04.365000 audit: BPF prog-id=84 op=UNLOAD Dec 16 12:26:04.365000 audit[2458]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2425 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465653562333233366536363838303437336630383664653631343661 Dec 16 12:26:04.365000 audit: BPF prog-id=83 op=UNLOAD Dec 16 12:26:04.365000 audit[2458]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2425 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465653562333233366536363838303437336630383664653631343661 Dec 16 12:26:04.365000 audit: BPF prog-id=85 op=LOAD Dec 16 12:26:04.365000 audit[2458]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=2425 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465653562333233366536363838303437336630383664653631343661 Dec 16 12:26:04.370000 audit: BPF prog-id=86 op=LOAD Dec 16 12:26:04.371000 audit: BPF prog-id=87 op=LOAD Dec 16 12:26:04.371000 audit[2461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=2429 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332623436626432613833636635323230383462646561646133323461 Dec 16 12:26:04.371000 audit: BPF prog-id=87 op=UNLOAD Dec 16 12:26:04.371000 audit[2461]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332623436626432613833636635323230383462646561646133323461 Dec 16 12:26:04.371000 audit: BPF prog-id=88 op=LOAD Dec 16 12:26:04.371000 audit[2461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=2429 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332623436626432613833636635323230383462646561646133323461 Dec 16 12:26:04.371000 audit: BPF prog-id=89 op=LOAD Dec 16 12:26:04.371000 audit[2461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=2429 pid=2461 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332623436626432613833636635323230383462646561646133323461 Dec 16 12:26:04.371000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:26:04.371000 audit[2461]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332623436626432613833636635323230383462646561646133323461 Dec 16 12:26:04.371000 audit: BPF prog-id=88 op=UNLOAD Dec 16 12:26:04.371000 audit[2461]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332623436626432613833636635323230383462646561646133323461 Dec 16 12:26:04.371000 audit: BPF prog-id=90 op=LOAD Dec 16 12:26:04.371000 audit[2461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=2429 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332623436626432613833636635323230383462646561646133323461 Dec 16 12:26:04.372000 audit: BPF prog-id=91 op=LOAD Dec 16 12:26:04.373000 audit: BPF prog-id=92 op=LOAD Dec 16 12:26:04.373000 audit[2441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=2410 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039366530383563336166396361353665616431353036373465376539 Dec 16 12:26:04.374000 audit: BPF prog-id=92 op=UNLOAD Dec 16 12:26:04.374000 audit[2441]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2410 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039366530383563336166396361353665616431353036373465376539 Dec 16 12:26:04.374000 audit: BPF prog-id=93 op=LOAD Dec 16 12:26:04.374000 audit[2441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=2410 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039366530383563336166396361353665616431353036373465376539 Dec 16 12:26:04.375000 audit: BPF prog-id=94 op=LOAD Dec 16 12:26:04.375000 audit[2441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=2410 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039366530383563336166396361353665616431353036373465376539 Dec 16 12:26:04.375000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:26:04.375000 audit[2441]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2410 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039366530383563336166396361353665616431353036373465376539 Dec 16 12:26:04.375000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:26:04.375000 audit[2441]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2410 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039366530383563336166396361353665616431353036373465376539 Dec 16 12:26:04.375000 audit: BPF prog-id=95 op=LOAD Dec 16 12:26:04.375000 audit[2441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=2410 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.375000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039366530383563336166396361353665616431353036373465376539 Dec 16 12:26:04.408501 containerd[1582]: time="2025-12-16T12:26:04.407002974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ee5b3236e66880473f086de6146a7cd8abf8776ee53abb34d5dbf51dd4220ac\"" Dec 16 12:26:04.410438 kubelet[2351]: E1216 12:26:04.410405 2351 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:04.412113 containerd[1582]: time="2025-12-16T12:26:04.411980805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0ec0cfc85455d03f3ca06b5d578a8d17,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2b46bd2a83cf522084bdeada324a743ca5dc4a13e714ee77772896aa0341e0b\"" Dec 16 12:26:04.413703 kubelet[2351]: E1216 12:26:04.413563 2351 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:04.417345 containerd[1582]: time="2025-12-16T12:26:04.417265364Z" level=info msg="CreateContainer within sandbox \"4ee5b3236e66880473f086de6146a7cd8abf8776ee53abb34d5dbf51dd4220ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:26:04.418211 containerd[1582]: time="2025-12-16T12:26:04.418159101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"096e085c3af9ca56ead150674e7e983e7d824d5c23a374aa87340a57d78a119a\"" Dec 16 12:26:04.419246 kubelet[2351]: E1216 12:26:04.419189 2351 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:04.419419 containerd[1582]: time="2025-12-16T12:26:04.419341694Z" level=info msg="CreateContainer within sandbox \"c2b46bd2a83cf522084bdeada324a743ca5dc4a13e714ee77772896aa0341e0b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:26:04.426434 containerd[1582]: time="2025-12-16T12:26:04.426368123Z" level=info msg="Container c4738b89ec75fc68a74076de8e27d69970221b06c563eb1b81b3dd1ff6ce666b: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:04.436757 containerd[1582]: time="2025-12-16T12:26:04.436579058Z" level=info msg="Container a2808b207b5ff1fa05106996091ceb361d3ac064fa07859057a3cf29753a3df6: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:04.437970 containerd[1582]: time="2025-12-16T12:26:04.437923791Z" level=info msg="CreateContainer within sandbox \"096e085c3af9ca56ead150674e7e983e7d824d5c23a374aa87340a57d78a119a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:26:04.442096 containerd[1582]: time="2025-12-16T12:26:04.442038301Z" level=info msg="CreateContainer within sandbox \"4ee5b3236e66880473f086de6146a7cd8abf8776ee53abb34d5dbf51dd4220ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4738b89ec75fc68a74076de8e27d69970221b06c563eb1b81b3dd1ff6ce666b\"" Dec 16 12:26:04.443057 
containerd[1582]: time="2025-12-16T12:26:04.443013549Z" level=info msg="StartContainer for \"c4738b89ec75fc68a74076de8e27d69970221b06c563eb1b81b3dd1ff6ce666b\"" Dec 16 12:26:04.444551 containerd[1582]: time="2025-12-16T12:26:04.444508441Z" level=info msg="connecting to shim c4738b89ec75fc68a74076de8e27d69970221b06c563eb1b81b3dd1ff6ce666b" address="unix:///run/containerd/s/b7a7a8dfe931189da04caaf4ec4e0da8eec622ea2756b4635ea8ddb0eea79a10" protocol=ttrpc version=3 Dec 16 12:26:04.449364 containerd[1582]: time="2025-12-16T12:26:04.448504852Z" level=info msg="CreateContainer within sandbox \"c2b46bd2a83cf522084bdeada324a743ca5dc4a13e714ee77772896aa0341e0b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a2808b207b5ff1fa05106996091ceb361d3ac064fa07859057a3cf29753a3df6\"" Dec 16 12:26:04.449484 containerd[1582]: time="2025-12-16T12:26:04.449367091Z" level=info msg="StartContainer for \"a2808b207b5ff1fa05106996091ceb361d3ac064fa07859057a3cf29753a3df6\"" Dec 16 12:26:04.450883 containerd[1582]: time="2025-12-16T12:26:04.450811650Z" level=info msg="connecting to shim a2808b207b5ff1fa05106996091ceb361d3ac064fa07859057a3cf29753a3df6" address="unix:///run/containerd/s/872002af7dfc7f312debb3c49f570e127ad0647c897fe63077217419f7acb5b3" protocol=ttrpc version=3 Dec 16 12:26:04.454435 containerd[1582]: time="2025-12-16T12:26:04.454374336Z" level=info msg="Container 814cf96ad8a0a8c5113faa9d8ff41b7d800108f37812ca9f7cc1d8f3e4f686e5: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:04.465594 systemd[1]: Started cri-containerd-c4738b89ec75fc68a74076de8e27d69970221b06c563eb1b81b3dd1ff6ce666b.scope - libcontainer container c4738b89ec75fc68a74076de8e27d69970221b06c563eb1b81b3dd1ff6ce666b. Dec 16 12:26:04.470033 systemd[1]: Started cri-containerd-a2808b207b5ff1fa05106996091ceb361d3ac064fa07859057a3cf29753a3df6.scope - libcontainer container a2808b207b5ff1fa05106996091ceb361d3ac064fa07859057a3cf29753a3df6. 
Dec 16 12:26:04.471061 containerd[1582]: time="2025-12-16T12:26:04.471016957Z" level=info msg="CreateContainer within sandbox \"096e085c3af9ca56ead150674e7e983e7d824d5c23a374aa87340a57d78a119a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"814cf96ad8a0a8c5113faa9d8ff41b7d800108f37812ca9f7cc1d8f3e4f686e5\"" Dec 16 12:26:04.472097 containerd[1582]: time="2025-12-16T12:26:04.472063618Z" level=info msg="StartContainer for \"814cf96ad8a0a8c5113faa9d8ff41b7d800108f37812ca9f7cc1d8f3e4f686e5\"" Dec 16 12:26:04.474370 containerd[1582]: time="2025-12-16T12:26:04.474069938Z" level=info msg="connecting to shim 814cf96ad8a0a8c5113faa9d8ff41b7d800108f37812ca9f7cc1d8f3e4f686e5" address="unix:///run/containerd/s/bd055fc228ef4cfa546a9cf8c12bab59c4d01cb251a18285a4b854c8dcdb5d33" protocol=ttrpc version=3 Dec 16 12:26:04.481000 audit: BPF prog-id=96 op=LOAD Dec 16 12:26:04.481000 audit: BPF prog-id=97 op=LOAD Dec 16 12:26:04.481000 audit[2536]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2425 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334373338623839656337356663363861373430373664653865323764 Dec 16 12:26:04.481000 audit: BPF prog-id=97 op=UNLOAD Dec 16 12:26:04.481000 audit[2536]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2425 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334373338623839656337356663363861373430373664653865323764 Dec 16 12:26:04.481000 audit: BPF prog-id=98 op=LOAD Dec 16 12:26:04.481000 audit[2536]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2425 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334373338623839656337356663363861373430373664653865323764 Dec 16 12:26:04.481000 audit: BPF prog-id=99 op=LOAD Dec 16 12:26:04.481000 audit[2536]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2425 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.481000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334373338623839656337356663363861373430373664653865323764 Dec 16 12:26:04.482000 audit: BPF prog-id=99 op=UNLOAD Dec 16 12:26:04.482000 audit[2536]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2425 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334373338623839656337356663363861373430373664653865323764 Dec 16 12:26:04.482000 audit: BPF prog-id=98 op=UNLOAD Dec 16 12:26:04.482000 audit[2536]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2425 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334373338623839656337356663363861373430373664653865323764 Dec 16 12:26:04.482000 audit: BPF prog-id=100 op=LOAD Dec 16 12:26:04.482000 audit[2536]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2425 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334373338623839656337356663363861373430373664653865323764 Dec 16 12:26:04.488000 audit: BPF prog-id=101 op=LOAD Dec 16 12:26:04.489000 audit: BPF prog-id=102 op=LOAD Dec 16 12:26:04.489000 audit[2542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2429 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132383038623230376235666631666130353130363939363039316365 Dec 16 12:26:04.489000 audit: BPF prog-id=102 op=UNLOAD Dec 16 12:26:04.489000 audit[2542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.489000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132383038623230376235666631666130353130363939363039316365 Dec 16 12:26:04.489000 audit: BPF prog-id=103 op=LOAD Dec 16 12:26:04.489000 audit[2542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2429 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132383038623230376235666631666130353130363939363039316365 Dec 16 12:26:04.489000 audit: BPF prog-id=104 op=LOAD Dec 16 12:26:04.489000 audit[2542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2429 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132383038623230376235666631666130353130363939363039316365 Dec 16 12:26:04.489000 audit: BPF prog-id=104 op=UNLOAD Dec 16 12:26:04.489000 audit[2542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132383038623230376235666631666130353130363939363039316365 Dec 16 12:26:04.489000 audit: BPF prog-id=103 op=UNLOAD Dec 16 12:26:04.489000 audit[2542]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132383038623230376235666631666130353130363939363039316365 Dec 16 12:26:04.489000 audit: BPF prog-id=105 op=LOAD Dec 16 12:26:04.489000 audit[2542]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2429 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.489000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132383038623230376235666631666130353130363939363039316365 Dec 16 12:26:04.502671 systemd[1]: Started cri-containerd-814cf96ad8a0a8c5113faa9d8ff41b7d800108f37812ca9f7cc1d8f3e4f686e5.scope - libcontainer container 814cf96ad8a0a8c5113faa9d8ff41b7d800108f37812ca9f7cc1d8f3e4f686e5. Dec 16 12:26:04.527000 audit: BPF prog-id=106 op=LOAD Dec 16 12:26:04.527000 audit: BPF prog-id=107 op=LOAD Dec 16 12:26:04.527000 audit[2574]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2410 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831346366393661643861306138633531313366616139643866663431 Dec 16 12:26:04.528000 audit: BPF prog-id=107 op=UNLOAD Dec 16 12:26:04.528000 audit[2574]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2410 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831346366393661643861306138633531313366616139643866663431 Dec 16 12:26:04.528000 audit: BPF prog-id=108 op=LOAD Dec 16 12:26:04.528000 audit[2574]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2410 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831346366393661643861306138633531313366616139643866663431 Dec 16 12:26:04.528000 audit: BPF prog-id=109 op=LOAD Dec 16 12:26:04.528000 audit[2574]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2410 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831346366393661643861306138633531313366616139643866663431 Dec 16 12:26:04.528000 audit: BPF prog-id=109 op=UNLOAD Dec 16 12:26:04.528000 audit[2574]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2410 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831346366393661643861306138633531313366616139643866663431 Dec 16 12:26:04.528000 audit: BPF prog-id=108 op=UNLOAD Dec 16 12:26:04.528000 audit[2574]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2410 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831346366393661643861306138633531313366616139643866663431 Dec 16 12:26:04.528000 audit: BPF prog-id=110 op=LOAD Dec 16 12:26:04.528000 audit[2574]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2410 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:04.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831346366393661643861306138633531313366616139643866663431 Dec 16 12:26:04.538310 containerd[1582]: time="2025-12-16T12:26:04.538092416Z" level=info msg="StartContainer for \"c4738b89ec75fc68a74076de8e27d69970221b06c563eb1b81b3dd1ff6ce666b\" returns successfully" Dec 16 12:26:04.542702 containerd[1582]: time="2025-12-16T12:26:04.542665055Z" level=info msg="StartContainer for \"a2808b207b5ff1fa05106996091ceb361d3ac064fa07859057a3cf29753a3df6\" returns successfully" Dec 16 12:26:04.566644 containerd[1582]: time="2025-12-16T12:26:04.566459137Z" level=info msg="StartContainer for \"814cf96ad8a0a8c5113faa9d8ff41b7d800108f37812ca9f7cc1d8f3e4f686e5\" returns successfully" Dec 16 12:26:04.581862 kubelet[2351]: E1216 12:26:04.581804 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="1.6s" Dec 16 12:26:04.758018 kubelet[2351]: I1216 12:26:04.757899 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:26:05.224466 kubelet[2351]: E1216 12:26:05.222125 2351 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:26:05.224466 kubelet[2351]: E1216 12:26:05.222346 2351 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:05.225044 kubelet[2351]: E1216 12:26:05.224797 2351 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:26:05.225044 kubelet[2351]: E1216 
12:26:05.224959 2351 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:05.228041 kubelet[2351]: E1216 12:26:05.228007 2351 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:26:05.228516 kubelet[2351]: E1216 12:26:05.228494 2351 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:06.163357 kubelet[2351]: I1216 12:26:06.162782 2351 apiserver.go:52] "Watching apiserver" Dec 16 12:26:06.167971 kubelet[2351]: I1216 12:26:06.167924 2351 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:26:06.174194 kubelet[2351]: I1216 12:26:06.174136 2351 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:26:06.174853 kubelet[2351]: I1216 12:26:06.174824 2351 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 12:26:06.230027 kubelet[2351]: I1216 12:26:06.229805 2351 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:26:06.230027 kubelet[2351]: I1216 12:26:06.229898 2351 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:06.230027 kubelet[2351]: I1216 12:26:06.229967 2351 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:26:06.245357 kubelet[2351]: E1216 12:26:06.243902 2351 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 12:26:06.245357 kubelet[2351]: E1216 12:26:06.243903 2351 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 12:26:06.245357 kubelet[2351]: E1216 12:26:06.243982 2351 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:06.245357 kubelet[2351]: E1216 12:26:06.244106 2351 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 12:26:06.245357 kubelet[2351]: I1216 12:26:06.244119 2351 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:06.245357 kubelet[2351]: E1216 12:26:06.244151 2351 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:06.245357 kubelet[2351]: E1216 12:26:06.244278 2351 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:06.245357 kubelet[2351]: E1216 12:26:06.244411 2351 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:06.248692 kubelet[2351]: E1216 12:26:06.248654 2351 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:06.248692 kubelet[2351]: I1216 12:26:06.248685 2351 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:26:06.252417 kubelet[2351]: E1216 12:26:06.252367 2351 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 12:26:07.232132 kubelet[2351]: I1216 12:26:07.232101 2351 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:26:07.238465 kubelet[2351]: E1216 12:26:07.238420 2351 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:08.233578 kubelet[2351]: E1216 12:26:08.233541 2351 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:08.401732 systemd[1]: Reload requested from client PID 2643 ('systemctl') (unit session-7.scope)... Dec 16 12:26:08.401753 systemd[1]: Reloading... Dec 16 12:26:08.506467 zram_generator::config[2692]: No configuration found. Dec 16 12:26:08.694540 systemd[1]: Reloading finished in 292 ms. Dec 16 12:26:08.718662 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:26:08.731406 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:26:08.731771 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:26:08.731867 systemd[1]: kubelet.service: Consumed 1.563s CPU time, 121.1M memory peak. Dec 16 12:26:08.733547 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 12:26:08.733632 kernel: audit: type=1131 audit(1765887968.730:384): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:08.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:08.733996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 12:26:08.733000 audit: BPF prog-id=111 op=LOAD Dec 16 12:26:08.735923 kernel: audit: type=1334 audit(1765887968.733:385): prog-id=111 op=LOAD Dec 16 12:26:08.735978 kernel: audit: type=1334 audit(1765887968.733:386): prog-id=80 op=UNLOAD Dec 16 12:26:08.733000 audit: BPF prog-id=80 op=UNLOAD Dec 16 12:26:08.735000 audit: BPF prog-id=112 op=LOAD Dec 16 12:26:08.735000 audit: BPF prog-id=73 op=UNLOAD Dec 16 12:26:08.738909 kernel: audit: type=1334 audit(1765887968.735:387): prog-id=112 op=LOAD Dec 16 12:26:08.738948 kernel: audit: type=1334 audit(1765887968.735:388): prog-id=73 op=UNLOAD Dec 16 12:26:08.738969 kernel: audit: type=1334 audit(1765887968.736:389): prog-id=113 op=LOAD Dec 16 12:26:08.736000 audit: BPF prog-id=113 op=LOAD Dec 16 12:26:08.737000 audit: BPF prog-id=114 op=LOAD Dec 16 12:26:08.740480 kernel: audit: type=1334 audit(1765887968.737:390): prog-id=114 op=LOAD Dec 16 12:26:08.740515 kernel: audit: type=1334 audit(1765887968.737:391): prog-id=74 op=UNLOAD Dec 16 12:26:08.737000 audit: BPF prog-id=74 op=UNLOAD Dec 16 12:26:08.737000 audit: BPF prog-id=75 op=UNLOAD Dec 16 12:26:08.742000 kernel: audit: type=1334 audit(1765887968.737:392): prog-id=75 op=UNLOAD Dec 16 12:26:08.742043 kernel: audit: type=1334 audit(1765887968.738:393): prog-id=115 op=LOAD Dec 16 12:26:08.738000 audit: BPF prog-id=115 op=LOAD Dec 16 12:26:08.738000 audit: BPF prog-id=67 op=UNLOAD Dec 16 12:26:08.739000 audit: BPF prog-id=116 op=LOAD Dec 16 12:26:08.742000 audit: BPF prog-id=117 op=LOAD Dec 16 12:26:08.742000 audit: BPF prog-id=68 op=UNLOAD Dec 16 12:26:08.742000 audit: BPF prog-id=69 op=UNLOAD Dec 16 12:26:08.742000 audit: BPF prog-id=118 op=LOAD Dec 16 12:26:08.742000 audit: BPF prog-id=119 op=LOAD Dec 16 12:26:08.742000 audit: BPF prog-id=62 op=UNLOAD Dec 16 12:26:08.742000 audit: BPF prog-id=63 op=UNLOAD Dec 16 12:26:08.742000 audit: BPF prog-id=120 op=LOAD Dec 16 12:26:08.743000 audit: BPF prog-id=61 op=UNLOAD Dec 16 12:26:08.743000 audit: BPF prog-id=121 op=LOAD Dec 16 12:26:08.743000 audit: BPF prog-id=76 op=UNLOAD Dec 16 12:26:08.744000 audit: BPF prog-id=122 op=LOAD Dec 16 12:26:08.744000 audit: BPF prog-id=77 op=UNLOAD Dec 16 12:26:08.744000 audit: BPF prog-id=123 op=LOAD Dec 16 12:26:08.744000 audit: BPF prog-id=124 op=LOAD Dec 16 12:26:08.744000 audit: BPF prog-id=78 op=UNLOAD Dec 16 12:26:08.744000 audit: BPF prog-id=79 op=UNLOAD Dec 16 12:26:08.745000 audit: BPF prog-id=125 op=LOAD Dec 16 12:26:08.745000 audit: BPF prog-id=64 op=UNLOAD Dec 16 12:26:08.745000 audit: BPF prog-id=126 op=LOAD Dec 16 12:26:08.745000 audit: BPF prog-id=127 op=LOAD Dec 16 12:26:08.746000 audit: BPF prog-id=65 op=UNLOAD Dec 16 12:26:08.746000 audit: BPF prog-id=66 op=UNLOAD Dec 16 12:26:08.746000 audit: BPF prog-id=128 op=LOAD Dec 16 12:26:08.746000 audit: BPF prog-id=70 op=UNLOAD Dec 16 12:26:08.746000 audit: BPF prog-id=129 op=LOAD Dec 16 12:26:08.746000 audit: BPF prog-id=130 op=LOAD Dec 16 12:26:08.746000 audit: BPF prog-id=71 op=UNLOAD Dec 16 12:26:08.746000 audit: BPF prog-id=72 op=UNLOAD Dec 16 12:26:08.886223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:26:08.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:26:08.907889 (kubelet)[2731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:26:08.956606 kubelet[2731]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:26:08.956606 kubelet[2731]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:26:08.956606 kubelet[2731]: I1216 12:26:08.956553 2731 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:26:08.968406 kubelet[2731]: I1216 12:26:08.967762 2731 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 12:26:08.968406 kubelet[2731]: I1216 12:26:08.967794 2731 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:26:08.968406 kubelet[2731]: I1216 12:26:08.967825 2731 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 12:26:08.968406 kubelet[2731]: I1216 12:26:08.967831 2731 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:26:08.968406 kubelet[2731]: I1216 12:26:08.968097 2731 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:26:08.969491 kubelet[2731]: I1216 12:26:08.969459 2731 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:26:08.974083 kubelet[2731]: I1216 12:26:08.973891 2731 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:26:08.979133 kubelet[2731]: I1216 12:26:08.979099 2731 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:26:08.983054 kubelet[2731]: I1216 12:26:08.982998 2731 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 12:26:08.983298 kubelet[2731]: I1216 12:26:08.983249 2731 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:26:08.983754 kubelet[2731]: I1216 12:26:08.983284 2731 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:26:08.983754 kubelet[2731]: I1216 12:26:08.983482 2731 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:26:08.983754 kubelet[2731]: I1216 12:26:08.983496 2731 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 12:26:08.983754 kubelet[2731]: I1216 12:26:08.983526 2731 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 12:26:08.984844 kubelet[2731]: I1216 12:26:08.984794 2731 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:26:08.985171 kubelet[2731]: I1216 12:26:08.985153 2731 kubelet.go:475] "Attempting to sync node with API server" Dec 16 12:26:08.985236 kubelet[2731]: I1216 12:26:08.985177 2731 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:26:08.985236 kubelet[2731]: I1216 12:26:08.985208 2731 kubelet.go:387] "Adding apiserver pod source" Dec 16 12:26:08.985236 kubelet[2731]: I1216 12:26:08.985219 2731 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:26:08.988801 kubelet[2731]: I1216 12:26:08.986638 2731 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:26:08.988801 kubelet[2731]: I1216 12:26:08.987233 2731 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:26:08.988801 kubelet[2731]: I1216 12:26:08.987262 2731 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 12:26:08.990779 
kubelet[2731]: I1216 12:26:08.990757 2731 server.go:1262] "Started kubelet" Dec 16 12:26:08.991827 kubelet[2731]: I1216 12:26:08.991242 2731 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:26:08.991827 kubelet[2731]: I1216 12:26:08.991351 2731 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 12:26:08.991827 kubelet[2731]: I1216 12:26:08.991614 2731 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:26:08.991827 kubelet[2731]: I1216 12:26:08.991619 2731 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:26:08.996341 kubelet[2731]: I1216 12:26:08.992287 2731 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 12:26:08.996341 kubelet[2731]: I1216 12:26:08.992411 2731 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:26:08.996341 kubelet[2731]: E1216 12:26:08.992886 2731 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:26:08.996341 kubelet[2731]: I1216 12:26:08.993261 2731 server.go:310] "Adding debug handlers to kubelet server" Dec 16 12:26:08.999068 kubelet[2731]: I1216 12:26:08.999012 2731 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:26:08.999278 kubelet[2731]: I1216 12:26:08.999250 2731 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 12:26:08.999466 kubelet[2731]: I1216 12:26:08.999450 2731 reconciler.go:29] "Reconciler: start to sync state" Dec 16 12:26:09.000784 kubelet[2731]: I1216 12:26:09.000722 2731 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:26:09.007602 kubelet[2731]: I1216 12:26:09.007573 2731 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:26:09.007602 kubelet[2731]: I1216 12:26:09.007596 2731 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:26:09.016374 kubelet[2731]: E1216 12:26:09.016270 2731 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:26:09.026637 kubelet[2731]: I1216 12:26:09.025613 2731 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 12:26:09.028048 kubelet[2731]: I1216 12:26:09.027721 2731 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:26:09.028048 kubelet[2731]: I1216 12:26:09.027753 2731 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:26:09.028048 kubelet[2731]: I1216 12:26:09.027775 2731 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:26:09.029446 kubelet[2731]: E1216 12:26:09.027835 2731 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:26:09.062535 kubelet[2731]: I1216 12:26:09.062504 2731 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:26:09.062535 kubelet[2731]: I1216 12:26:09.062525 2731 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:26:09.062535 kubelet[2731]: I1216 12:26:09.062549 2731 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:26:09.062948 kubelet[2731]: I1216 12:26:09.062781 2731 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:26:09.062948 kubelet[2731]: I1216 12:26:09.062793 2731 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:26:09.062948 kubelet[2731]: I1216 12:26:09.062810 2731 policy_none.go:49] "None policy: Start" Dec 16 12:26:09.062948 kubelet[2731]: I1216 12:26:09.062819 2731 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:26:09.062948 kubelet[2731]: I1216 12:26:09.062828 2731 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 12:26:09.064864 kubelet[2731]: I1216 12:26:09.062996 2731 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 12:26:09.064864 kubelet[2731]: I1216 12:26:09.063009 2731 policy_none.go:47] "Start" Dec 16 12:26:09.068233 kubelet[2731]: E1216 12:26:09.068116 2731 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:26:09.068426 kubelet[2731]: I1216 12:26:09.068355 2731 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:26:09.068474 kubelet[2731]: I1216 12:26:09.068432 2731 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:26:09.068749 kubelet[2731]: I1216 12:26:09.068730 2731 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:26:09.071842 kubelet[2731]: E1216 12:26:09.071807 2731 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:26:09.131063 kubelet[2731]: I1216 12:26:09.130937 2731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:09.131063 kubelet[2731]: I1216 12:26:09.130937 2731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:26:09.131063 kubelet[2731]: I1216 12:26:09.131018 2731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:26:09.140937 kubelet[2731]: E1216 12:26:09.140894 2731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 12:26:09.173137 kubelet[2731]: I1216 12:26:09.173020 2731 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:26:09.182495 kubelet[2731]: I1216 12:26:09.182380 2731 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 12:26:09.182495 kubelet[2731]: I1216 12:26:09.182488 2731 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:26:09.301101 kubelet[2731]: I1216 12:26:09.300813 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ec0cfc85455d03f3ca06b5d578a8d17-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ec0cfc85455d03f3ca06b5d578a8d17\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:26:09.301101 kubelet[2731]: I1216 12:26:09.301019 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ec0cfc85455d03f3ca06b5d578a8d17-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ec0cfc85455d03f3ca06b5d578a8d17\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:26:09.301101 kubelet[2731]: I1216 12:26:09.301046 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ec0cfc85455d03f3ca06b5d578a8d17-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0ec0cfc85455d03f3ca06b5d578a8d17\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:26:09.301369 kubelet[2731]: I1216 12:26:09.301174 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:09.301369 kubelet[2731]: I1216 12:26:09.301194 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:09.301580 kubelet[2731]: I1216 12:26:09.301462 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:09.301580 kubelet[2731]: I1216 12:26:09.301491 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:09.301580 kubelet[2731]: I1216 12:26:09.301527 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:26:09.301580 kubelet[2731]: I1216 12:26:09.301541 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:26:09.441201 kubelet[2731]: E1216 12:26:09.441156 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:09.441621 kubelet[2731]: E1216 12:26:09.441305 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:09.441621 kubelet[2731]: E1216 12:26:09.441511 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:09.986641 kubelet[2731]: I1216 12:26:09.986534 2731 apiserver.go:52] "Watching apiserver" Dec 16 12:26:10.000260 kubelet[2731]: I1216 12:26:10.000207 2731 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 12:26:10.046795 kubelet[2731]: E1216 12:26:10.046741 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:10.047303 kubelet[2731]: I1216 12:26:10.046937 2731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:26:10.047303 kubelet[2731]: E1216 12:26:10.047125 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:10.066757 kubelet[2731]: E1216 12:26:10.066485 2731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:26:10.066757 kubelet[2731]: E1216 12:26:10.066680 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:10.127858 kubelet[2731]: I1216 12:26:10.127783 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.127763592 podStartE2EDuration="1.127763592s" 
podCreationTimestamp="2025-12-16 12:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:26:10.117225878 +0000 UTC m=+1.206210791" watchObservedRunningTime="2025-12-16 12:26:10.127763592 +0000 UTC m=+1.216748465" Dec 16 12:26:10.128000 kubelet[2731]: I1216 12:26:10.127923 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.127918027 podStartE2EDuration="1.127918027s" podCreationTimestamp="2025-12-16 12:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:26:10.127883743 +0000 UTC m=+1.216868616" watchObservedRunningTime="2025-12-16 12:26:10.127918027 +0000 UTC m=+1.216902900" Dec 16 12:26:10.148909 kubelet[2731]: I1216 12:26:10.148840 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.148820077 podStartE2EDuration="3.148820077s" podCreationTimestamp="2025-12-16 12:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:26:10.137214898 +0000 UTC m=+1.226199771" watchObservedRunningTime="2025-12-16 12:26:10.148820077 +0000 UTC m=+1.237804950" Dec 16 12:26:11.049036 kubelet[2731]: E1216 12:26:11.048956 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:11.049429 kubelet[2731]: E1216 12:26:11.049050 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:11.402391 kubelet[2731]: E1216 12:26:11.400727 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:14.665377 kubelet[2731]: E1216 12:26:14.665304 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:14.850736 kubelet[2731]: I1216 12:26:14.850699 2731 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:26:14.851111 containerd[1582]: time="2025-12-16T12:26:14.851077485Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:26:14.851611 kubelet[2731]: I1216 12:26:14.851264 2731 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:26:15.067121 kubelet[2731]: E1216 12:26:15.066860 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:15.874274 systemd[1]: Created slice kubepods-besteffort-pod7efbefe5_8c59_42f9_b56d_5f3ee0184ba2.slice - libcontainer container kubepods-besteffort-pod7efbefe5_8c59_42f9_b56d_5f3ee0184ba2.slice. 
Dec 16 12:26:16.046147 kubelet[2731]: I1216 12:26:16.046106 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7efbefe5-8c59-42f9-b56d-5f3ee0184ba2-kube-proxy\") pod \"kube-proxy-psxhl\" (UID: \"7efbefe5-8c59-42f9-b56d-5f3ee0184ba2\") " pod="kube-system/kube-proxy-psxhl" Dec 16 12:26:16.046147 kubelet[2731]: I1216 12:26:16.046148 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7efbefe5-8c59-42f9-b56d-5f3ee0184ba2-lib-modules\") pod \"kube-proxy-psxhl\" (UID: \"7efbefe5-8c59-42f9-b56d-5f3ee0184ba2\") " pod="kube-system/kube-proxy-psxhl" Dec 16 12:26:16.046147 kubelet[2731]: I1216 12:26:16.046168 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7efbefe5-8c59-42f9-b56d-5f3ee0184ba2-xtables-lock\") pod \"kube-proxy-psxhl\" (UID: \"7efbefe5-8c59-42f9-b56d-5f3ee0184ba2\") " pod="kube-system/kube-proxy-psxhl" Dec 16 12:26:16.046736 kubelet[2731]: I1216 12:26:16.046185 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgfkn\" (UniqueName: \"kubernetes.io/projected/7efbefe5-8c59-42f9-b56d-5f3ee0184ba2-kube-api-access-mgfkn\") pod \"kube-proxy-psxhl\" (UID: \"7efbefe5-8c59-42f9-b56d-5f3ee0184ba2\") " pod="kube-system/kube-proxy-psxhl" Dec 16 12:26:16.066031 kubelet[2731]: E1216 12:26:16.065979 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:16.097149 systemd[1]: Created slice kubepods-besteffort-pod6e16569f_122b_4cb7_865f_6523c0e76da4.slice - libcontainer container kubepods-besteffort-pod6e16569f_122b_4cb7_865f_6523c0e76da4.slice. Dec 16 12:26:16.192680 kubelet[2731]: E1216 12:26:16.192563 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:16.193403 containerd[1582]: time="2025-12-16T12:26:16.193312491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-psxhl,Uid:7efbefe5-8c59-42f9-b56d-5f3ee0184ba2,Namespace:kube-system,Attempt:0,}" Dec 16 12:26:16.216086 containerd[1582]: time="2025-12-16T12:26:16.215955625Z" level=info msg="connecting to shim 6833827bcea83130c6d20c9e3a184b8bf0dad3d5137e41fc669f727e41ec423f" address="unix:///run/containerd/s/5dc240c480d7c4eafc4e45d017b1e376a01cbe60f47db5cdc254bce743aef66c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:16.246675 systemd[1]: Started cri-containerd-6833827bcea83130c6d20c9e3a184b8bf0dad3d5137e41fc669f727e41ec423f.scope - libcontainer container 6833827bcea83130c6d20c9e3a184b8bf0dad3d5137e41fc669f727e41ec423f. 
Dec 16 12:26:16.248248 kubelet[2731]: I1216 12:26:16.248215 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6e16569f-122b-4cb7-865f-6523c0e76da4-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-khvth\" (UID: \"6e16569f-122b-4cb7-865f-6523c0e76da4\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-khvth" Dec 16 12:26:16.248377 kubelet[2731]: I1216 12:26:16.248259 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjptc\" (UniqueName: \"kubernetes.io/projected/6e16569f-122b-4cb7-865f-6523c0e76da4-kube-api-access-bjptc\") pod \"tigera-operator-65cdcdfd6d-khvth\" (UID: \"6e16569f-122b-4cb7-865f-6523c0e76da4\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-khvth" Dec 16 12:26:16.257000 audit: BPF prog-id=131 op=LOAD Dec 16 12:26:16.258890 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 12:26:16.258939 kernel: audit: type=1334 audit(1765887976.257:426): prog-id=131 op=LOAD Dec 16 12:26:16.258000 audit: BPF prog-id=132 op=LOAD Dec 16 12:26:16.260487 kernel: audit: type=1334 audit(1765887976.258:427): prog-id=132 op=LOAD Dec 16 12:26:16.258000 audit[2808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2797 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.266085 kernel: audit: type=1300 audit(1765887976.258:427): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2797 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333338323762636561383331333063366432306339653361313834 Dec 16 12:26:16.269645 kernel: audit: type=1327 audit(1765887976.258:427): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333338323762636561383331333063366432306339653361313834 Dec 16 12:26:16.259000 audit: BPF prog-id=132 op=UNLOAD Dec 16 12:26:16.270576 kernel: audit: type=1334 audit(1765887976.259:428): prog-id=132 op=UNLOAD Dec 16 12:26:16.259000 audit[2808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2797 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.273794 kernel: audit: type=1300 audit(1765887976.259:428): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2797 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.273878 kernel: audit: type=1327 audit(1765887976.259:428): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333338323762636561383331333063366432306339653361313834 Dec 16 12:26:16.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333338323762636561383331333063366432306339653361313834 Dec 16 12:26:16.259000 audit: BPF prog-id=133 op=LOAD Dec 16 12:26:16.277883 kernel: audit: type=1334 audit(1765887976.259:429): prog-id=133 op=LOAD Dec 16 12:26:16.277952 kernel: audit: type=1300 audit(1765887976.259:429): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2797 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.259000 audit[2808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2797 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333338323762636561383331333063366432306339653361313834 Dec 16 12:26:16.288820 kernel: audit: type=1327 audit(1765887976.259:429): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333338323762636561383331333063366432306339653361313834 Dec 16 12:26:16.260000 audit: BPF prog-id=134 op=LOAD Dec 16 12:26:16.260000 audit[2808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2797 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333338323762636561383331333063366432306339653361313834 Dec 16 12:26:16.265000 audit: BPF prog-id=134 op=UNLOAD Dec 16 12:26:16.265000 audit[2808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2797 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333338323762636561383331333063366432306339653361313834 Dec 16 12:26:16.265000 audit: BPF prog-id=133 op=UNLOAD Dec 16 12:26:16.265000 audit[2808]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 
a0=15 a1=0 a2=0 a3=0 items=0 ppid=2797 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.265000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333338323762636561383331333063366432306339653361313834 Dec 16 12:26:16.266000 audit: BPF prog-id=135 op=LOAD Dec 16 12:26:16.266000 audit[2808]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2797 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333338323762636561383331333063366432306339653361313834 Dec 16 12:26:16.303781 containerd[1582]: time="2025-12-16T12:26:16.303733488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-psxhl,Uid:7efbefe5-8c59-42f9-b56d-5f3ee0184ba2,Namespace:kube-system,Attempt:0,} returns sandbox id \"6833827bcea83130c6d20c9e3a184b8bf0dad3d5137e41fc669f727e41ec423f\"" Dec 16 12:26:16.304743 kubelet[2731]: E1216 12:26:16.304714 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:16.311827 containerd[1582]: time="2025-12-16T12:26:16.311717559Z" level=info msg="CreateContainer within sandbox \"6833827bcea83130c6d20c9e3a184b8bf0dad3d5137e41fc669f727e41ec423f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:26:16.347313 containerd[1582]: time="2025-12-16T12:26:16.347258048Z" level=info msg="Container c0637eefdf3a144f49590a87fdae40d872380f126d593a57b879efba0b5a903f: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:16.418162 containerd[1582]: time="2025-12-16T12:26:16.417905775Z" level=info msg="CreateContainer within sandbox \"6833827bcea83130c6d20c9e3a184b8bf0dad3d5137e41fc669f727e41ec423f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c0637eefdf3a144f49590a87fdae40d872380f126d593a57b879efba0b5a903f\"" Dec 16 12:26:16.418718 containerd[1582]: time="2025-12-16T12:26:16.418682960Z" level=info msg="StartContainer for \"c0637eefdf3a144f49590a87fdae40d872380f126d593a57b879efba0b5a903f\"" Dec 16 12:26:16.420952 containerd[1582]: time="2025-12-16T12:26:16.420738639Z" level=info msg="connecting to shim c0637eefdf3a144f49590a87fdae40d872380f126d593a57b879efba0b5a903f" address="unix:///run/containerd/s/5dc240c480d7c4eafc4e45d017b1e376a01cbe60f47db5cdc254bce743aef66c" protocol=ttrpc version=3 Dec 16 12:26:16.428249 containerd[1582]: time="2025-12-16T12:26:16.428198661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-khvth,Uid:6e16569f-122b-4cb7-865f-6523c0e76da4,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:26:16.452605 systemd[1]: Started cri-containerd-c0637eefdf3a144f49590a87fdae40d872380f126d593a57b879efba0b5a903f.scope - libcontainer container c0637eefdf3a144f49590a87fdae40d872380f126d593a57b879efba0b5a903f. 
Dec 16 12:26:16.460923 containerd[1582]: time="2025-12-16T12:26:16.460743987Z" level=info msg="connecting to shim 82d22e7c46dbad074ab6be01b2a7269c79f5bb5006a2fb5689d2bab6e01715a4" address="unix:///run/containerd/s/c727e097d67ce5f89e2c93dc1ae8c0e57bccb3342a9821957ebaca7b77ac65ea" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:16.501576 systemd[1]: Started cri-containerd-82d22e7c46dbad074ab6be01b2a7269c79f5bb5006a2fb5689d2bab6e01715a4.scope - libcontainer container 82d22e7c46dbad074ab6be01b2a7269c79f5bb5006a2fb5689d2bab6e01715a4. Dec 16 12:26:16.512000 audit: BPF prog-id=136 op=LOAD Dec 16 12:26:16.512000 audit[2835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2797 pid=2835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330363337656566646633613134346634393539306138376664616534 Dec 16 12:26:16.512000 audit: BPF prog-id=137 op=LOAD Dec 16 12:26:16.512000 audit[2835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2797 pid=2835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330363337656566646633613134346634393539306138376664616534 Dec 16 12:26:16.512000 audit: BPF prog-id=137 op=UNLOAD Dec 16 12:26:16.512000 audit[2835]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2797 pid=2835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330363337656566646633613134346634393539306138376664616534 Dec 16 12:26:16.512000 audit: BPF prog-id=136 op=UNLOAD Dec 16 12:26:16.512000 audit[2835]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2797 pid=2835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330363337656566646633613134346634393539306138376664616534 Dec 16 12:26:16.512000 audit: BPF prog-id=138 op=LOAD Dec 16 12:26:16.512000 audit[2835]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2797 pid=2835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330363337656566646633613134346634393539306138376664616534 Dec 16 12:26:16.514000 audit: BPF prog-id=139 op=LOAD Dec 16 12:26:16.514000 audit: BPF prog-id=140 op=LOAD Dec 16 12:26:16.514000 audit[2872]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832643232653763343664626164303734616236626530316232613732 Dec 16 12:26:16.515000 audit: BPF prog-id=140 op=UNLOAD Dec 16 12:26:16.515000 audit[2872]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832643232653763343664626164303734616236626530316232613732 Dec 16 12:26:16.515000 audit: BPF prog-id=141 op=LOAD Dec 16 12:26:16.515000 audit[2872]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832643232653763343664626164303734616236626530316232613732 Dec 16 12:26:16.515000 audit: BPF prog-id=142 op=LOAD Dec 16 12:26:16.515000 audit[2872]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832643232653763343664626164303734616236626530316232613732 Dec 16 12:26:16.515000 audit: BPF prog-id=142 op=UNLOAD Dec 16 12:26:16.515000 audit[2872]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 12:26:16.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832643232653763343664626164303734616236626530316232613732 Dec 16 12:26:16.515000 audit: BPF prog-id=141 op=UNLOAD Dec 16 12:26:16.515000 audit[2872]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832643232653763343664626164303734616236626530316232613732 Dec 16 12:26:16.515000 audit: BPF prog-id=143 op=LOAD Dec 16 12:26:16.515000 audit[2872]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2861 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832643232653763343664626164303734616236626530316232613732 Dec 16 12:26:16.545256 containerd[1582]: time="2025-12-16T12:26:16.545060129Z" level=info msg="StartContainer for \"c0637eefdf3a144f49590a87fdae40d872380f126d593a57b879efba0b5a903f\" returns successfully" Dec 16 12:26:16.546200 containerd[1582]: time="2025-12-16T12:26:16.546060265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-khvth,Uid:6e16569f-122b-4cb7-865f-6523c0e76da4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"82d22e7c46dbad074ab6be01b2a7269c79f5bb5006a2fb5689d2bab6e01715a4\"" Dec 16 12:26:16.549795 containerd[1582]: time="2025-12-16T12:26:16.549756867Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:26:16.788000 audit[2943]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.788000 audit[2943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff26cc6e0 a2=0 a3=1 items=0 ppid=2851 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.788000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:26:16.788000 audit[2944]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:16.788000 audit[2944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff50bd850 a2=0 a3=1 items=0 ppid=2851 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.788000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:26:16.790000 audit[2945]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=2945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.790000 audit[2945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe34c6930 a2=0 a3=1 items=0 ppid=2851 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.790000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:26:16.792000 audit[2949]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.792000 audit[2949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe65893c0 a2=0 a3=1 items=0 ppid=2851 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.792000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:26:16.793000 audit[2947]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=2947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:16.793000 audit[2947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff82c6f10 a2=0 a3=1 items=0 ppid=2851 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.793000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:26:16.796000 audit[2951]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:16.796000 audit[2951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe894ed80 a2=0 a3=1 items=0 ppid=2851 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.796000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:26:16.892000 audit[2952]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.892000 audit[2952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffffb8d4f30 a2=0 a3=1 items=0 ppid=2851 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.892000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:26:16.895000 audit[2954]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.895000 audit[2954]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffb791f00 a2=0 a3=1 items=0 ppid=2851 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.895000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 16 12:26:16.899000 audit[2957]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.899000 audit[2957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffd3804d0 a2=0 a3=1 items=0 ppid=2851 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 12:26:16.900000 audit[2958]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.900000 audit[2958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb65cac0 a2=0 a3=1 items=0 ppid=2851 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.900000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:26:16.903000 audit[2960]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.903000 audit[2960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd4ece360 a2=0 a3=1 items=0 ppid=2851 pid=2960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.903000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:26:16.904000 audit[2961]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.904000 audit[2961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa6c77b0 a2=0 a3=1 items=0 ppid=2851 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.904000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:26:16.908000 audit[2963]: NETFILTER_CFG 
table=filter:66 family=2 entries=1 op=nft_register_rule pid=2963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.908000 audit[2963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff7cd4230 a2=0 a3=1 items=0 ppid=2851 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.908000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:26:16.912000 audit[2966]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.912000 audit[2966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffce0d3b60 a2=0 a3=1 items=0 ppid=2851 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.912000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:26:16.913000 audit[2967]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.913000 audit[2967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffcfe6880 a2=0 a3=1 items=0 ppid=2851 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:26:16.916000 audit[2969]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.916000 audit[2969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffec3b1d20 a2=0 a3=1 items=0 ppid=2851 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.916000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:26:16.917000 audit[2970]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.917000 audit[2970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffef8b89a0 a2=0 a3=1 items=0 ppid=2851 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.917000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:26:16.921000 audit[2972]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.921000 audit[2972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd6062c50 a2=0 a3=1 items=0 ppid=2851 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.921000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 16 12:26:16.925000 audit[2975]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.925000 audit[2975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe2aaf110 a2=0 a3=1 items=0 ppid=2851 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.925000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 12:26:16.930000 audit[2978]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.930000 audit[2978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe62cfae0 a2=0 a3=1 items=0 ppid=2851 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.930000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 12:26:16.932000 audit[2979]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.932000 audit[2979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff4d00890 a2=0 a3=1 items=0 ppid=2851 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.932000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:26:16.935000 audit[2981]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=2981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.935000 audit[2981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcaa9d7a0 a2=0 a3=1 items=0 ppid=2851 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.935000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:26:16.938000 audit[2984]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=2984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.938000 audit[2984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff2f540a0 a2=0 a3=1 items=0 ppid=2851 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.938000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:26:16.940000 audit[2985]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=2985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.940000 audit[2985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc058f2a0 a2=0 a3=1 items=0 ppid=2851 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:26:16.943000 audit[2987]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=2987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:26:16.943000 audit[2987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffede1a830 a2=0 a3=1 items=0 ppid=2851 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.943000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:26:16.968000 audit[2993]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=2993 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:16.968000 audit[2993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff6660ba0 a2=0 a3=1 items=0 ppid=2851 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.968000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:16.987000 audit[2993]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=2993 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:16.987000 audit[2993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff6660ba0 a2=0 a3=1 items=0 ppid=2851 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.987000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:16.988000 audit[2998]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:16.988000 audit[2998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffff006d90 a2=0 a3=1 items=0 ppid=2851 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.988000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:26:16.992000 audit[3000]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:16.992000 audit[3000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd102bb40 a2=0 a3=1 items=0 ppid=2851 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.992000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 16 12:26:16.996000 audit[3003]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:16.996000 audit[3003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe91128e0 a2=0 a3=1 items=0 ppid=2851 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.996000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 16 12:26:16.997000 audit[3004]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:16.997000 audit[3004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb369520 a2=0 a3=1 items=0 ppid=2851 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:16.997000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:26:17.000000 audit[3006]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.000000 audit[3006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe7f44690 a2=0 a3=1 items=0 
ppid=2851 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.000000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:26:17.002000 audit[3007]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.002000 audit[3007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0093650 a2=0 a3=1 items=0 ppid=2851 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.002000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:26:17.005000 audit[3009]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.005000 audit[3009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd7cadf30 a2=0 a3=1 items=0 ppid=2851 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.005000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:26:17.010000 audit[3012]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.010000 audit[3012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffff88a4600 a2=0 a3=1 items=0 ppid=2851 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.010000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:26:17.011000 audit[3013]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.011000 audit[3013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7200d00 a2=0 a3=1 items=0 ppid=2851 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.011000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:26:17.014000 audit[3015]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3015 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.014000 audit[3015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdca4ebb0 a2=0 a3=1 items=0 ppid=2851 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.014000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:26:17.015000 audit[3016]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.015000 audit[3016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcafa6b40 a2=0 a3=1 items=0 ppid=2851 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.015000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:26:17.019000 audit[3018]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.019000 audit[3018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdb0fcf50 a2=0 a3=1 items=0 ppid=2851 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.019000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 16 12:26:17.023000 audit[3021]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3021 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.023000 audit[3021]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd8305550 a2=0 a3=1 items=0 ppid=2851 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.023000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 16 12:26:17.028000 audit[3024]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.028000 audit[3024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffed14e190 a2=0 a3=1 items=0 ppid=2851 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.028000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 16 12:26:17.030000 audit[3025]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.030000 audit[3025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc4d78310 a2=0 a3=1 items=0 ppid=2851 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.030000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:26:17.034000 audit[3027]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.034000 audit[3027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff46071d0 a2=0 a3=1 items=0 ppid=2851 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.034000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:26:17.038000 audit[3030]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.038000 audit[3030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcb97d110 a2=0 a3=1 items=0 ppid=2851 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.038000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:26:17.039000 audit[3031]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.039000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc70dad50 a2=0 a3=1 items=0 ppid=2851 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.039000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:26:17.042000 audit[3033]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.042000 audit[3033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe47c7a40 a2=0 a3=1 items=0 ppid=2851 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.042000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:26:17.043000 audit[3034]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.043000 audit[3034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb1cdd70 a2=0 a3=1 items=0 ppid=2851 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.043000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:26:17.047000 audit[3036]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.047000 audit[3036]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffebfcbbf0 a2=0 a3=1 items=0 ppid=2851 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.047000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:26:17.051000 audit[3039]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:26:17.051000 audit[3039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff8a90260 a2=0 a3=1 items=0 ppid=2851 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.051000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:26:17.056000 audit[3041]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:26:17.056000 audit[3041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd0710ad0 a2=0 a3=1 items=0 ppid=2851 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.056000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:17.057000 audit[3041]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:26:17.057000 audit[3041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd0710ad0 a2=0 a3=1 items=0 ppid=2851 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:17.057000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:17.074815 kubelet[2731]: E1216 12:26:17.073947 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:17.164161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1716605711.mount: Deactivated successfully. Dec 16 12:26:17.617187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409521100.mount: Deactivated successfully. Dec 16 12:26:19.199445 kubelet[2731]: E1216 12:26:19.199411 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:19.225186 kubelet[2731]: I1216 12:26:19.225119 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-psxhl" podStartSLOduration=4.225103439 podStartE2EDuration="4.225103439s" podCreationTimestamp="2025-12-16 12:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:26:17.088824262 +0000 UTC m=+8.177809135" watchObservedRunningTime="2025-12-16 12:26:19.225103439 +0000 UTC m=+10.314088312" Dec 16 12:26:19.836094 containerd[1582]: time="2025-12-16T12:26:19.836026363Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 12:26:19.841287 containerd[1582]: time="2025-12-16T12:26:19.841232955Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.291433773s" Dec 16 12:26:19.841287 containerd[1582]: time="2025-12-16T12:26:19.841286352Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:26:19.847117 containerd[1582]: time="2025-12-16T12:26:19.847066388Z" level=info msg="CreateContainer within sandbox \"82d22e7c46dbad074ab6be01b2a7269c79f5bb5006a2fb5689d2bab6e01715a4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:26:19.857623 containerd[1582]: time="2025-12-16T12:26:19.856055646Z" level=info msg="Container ce859bbdb39086a018924ade616c7aa4af0aba6bc6c8e743c5588f787c7bee2f: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:19.862798 containerd[1582]: time="2025-12-16T12:26:19.862619634Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:19.863560 containerd[1582]: time="2025-12-16T12:26:19.863526714Z" level=info msg="CreateContainer within sandbox \"82d22e7c46dbad074ab6be01b2a7269c79f5bb5006a2fb5689d2bab6e01715a4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ce859bbdb39086a018924ade616c7aa4af0aba6bc6c8e743c5588f787c7bee2f\"" Dec 16 12:26:19.863748 containerd[1582]: time="2025-12-16T12:26:19.863544326Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:19.864800 
containerd[1582]: time="2025-12-16T12:26:19.864734366Z" level=info msg="StartContainer for \"ce859bbdb39086a018924ade616c7aa4af0aba6bc6c8e743c5588f787c7bee2f\"" Dec 16 12:26:19.865421 containerd[1582]: time="2025-12-16T12:26:19.865378180Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:19.865745 containerd[1582]: time="2025-12-16T12:26:19.865648650Z" level=info msg="connecting to shim ce859bbdb39086a018924ade616c7aa4af0aba6bc6c8e743c5588f787c7bee2f" address="unix:///run/containerd/s/c727e097d67ce5f89e2c93dc1ae8c0e57bccb3342a9821957ebaca7b77ac65ea" protocol=ttrpc version=3 Dec 16 12:26:19.908578 systemd[1]: Started cri-containerd-ce859bbdb39086a018924ade616c7aa4af0aba6bc6c8e743c5588f787c7bee2f.scope - libcontainer container ce859bbdb39086a018924ade616c7aa4af0aba6bc6c8e743c5588f787c7bee2f. Dec 16 12:26:19.918000 audit: BPF prog-id=144 op=LOAD Dec 16 12:26:19.918000 audit: BPF prog-id=145 op=LOAD Dec 16 12:26:19.918000 audit[3052]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2861 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:19.918000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365383539626264623339303836613031383932346164653631366337 Dec 16 12:26:19.918000 audit: BPF prog-id=145 op=UNLOAD Dec 16 12:26:19.918000 audit[3052]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:19.918000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365383539626264623339303836613031383932346164653631366337 Dec 16 12:26:19.918000 audit: BPF prog-id=146 op=LOAD Dec 16 12:26:19.918000 audit[3052]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2861 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:19.918000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365383539626264623339303836613031383932346164653631366337 Dec 16 12:26:19.918000 audit: BPF prog-id=147 op=LOAD Dec 16 12:26:19.918000 audit[3052]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2861 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:19.918000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365383539626264623339303836613031383932346164653631366337 Dec 16 12:26:19.919000 audit: BPF prog-id=147 op=UNLOAD Dec 16 12:26:19.919000 audit[3052]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:19.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365383539626264623339303836613031383932346164653631366337 Dec 16 12:26:19.919000 audit: BPF prog-id=146 op=UNLOAD Dec 16 12:26:19.919000 audit[3052]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:19.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365383539626264623339303836613031383932346164653631366337 Dec 16 12:26:19.919000 audit: BPF prog-id=148 op=LOAD Dec 16 12:26:19.919000 audit[3052]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2861 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:19.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365383539626264623339303836613031383932346164653631366337 Dec 16 12:26:19.958175 containerd[1582]: time="2025-12-16T12:26:19.958118011Z" level=info msg="StartContainer for \"ce859bbdb39086a018924ade616c7aa4af0aba6bc6c8e743c5588f787c7bee2f\" returns successfully" Dec 16 12:26:20.080304 kubelet[2731]: E1216 12:26:20.080263 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:20.144792 kubelet[2731]: I1216 12:26:20.144585 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-khvth" podStartSLOduration=0.851952181 podStartE2EDuration="4.144567409s" podCreationTimestamp="2025-12-16 12:26:16 +0000 UTC" firstStartedPulling="2025-12-16 12:26:16.549370297 +0000 UTC m=+7.638355170" lastFinishedPulling="2025-12-16 12:26:19.841985565 +0000 UTC m=+10.930970398" observedRunningTime="2025-12-16 12:26:20.144399818 +0000 UTC m=+11.233384691" watchObservedRunningTime="2025-12-16 12:26:20.144567409 +0000 UTC m=+11.233552242" Dec 16 12:26:21.409762 kubelet[2731]: E1216 12:26:21.409719 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:22.076445 systemd[1]: cri-containerd-ce859bbdb39086a018924ade616c7aa4af0aba6bc6c8e743c5588f787c7bee2f.scope: Deactivated successfully. Dec 16 12:26:22.082000 audit: BPF prog-id=144 op=UNLOAD Dec 16 12:26:22.084083 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 12:26:22.084189 kernel: audit: type=1334 audit(1765887982.082:506): prog-id=144 op=UNLOAD Dec 16 12:26:22.083000 audit: BPF prog-id=148 op=UNLOAD Dec 16 12:26:22.087346 kernel: audit: type=1334 audit(1765887982.083:507): prog-id=148 op=UNLOAD Dec 16 12:26:22.107097 containerd[1582]: time="2025-12-16T12:26:22.107041145Z" level=info msg="received container exit event container_id:\"ce859bbdb39086a018924ade616c7aa4af0aba6bc6c8e743c5588f787c7bee2f\" id:\"ce859bbdb39086a018924ade616c7aa4af0aba6bc6c8e743c5588f787c7bee2f\" pid:3065 exit_status:1 exited_at:{seconds:1765887982 nanos:101652654}" Dec 16 12:26:22.138894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce859bbdb39086a018924ade616c7aa4af0aba6bc6c8e743c5588f787c7bee2f-rootfs.mount: Deactivated successfully. Dec 16 12:26:23.097493 kubelet[2731]: I1216 12:26:23.097441 2731 scope.go:117] "RemoveContainer" containerID="ce859bbdb39086a018924ade616c7aa4af0aba6bc6c8e743c5588f787c7bee2f" Dec 16 12:26:23.102087 containerd[1582]: time="2025-12-16T12:26:23.102000353Z" level=info msg="CreateContainer within sandbox \"82d22e7c46dbad074ab6be01b2a7269c79f5bb5006a2fb5689d2bab6e01715a4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 12:26:23.113357 containerd[1582]: time="2025-12-16T12:26:23.112868794Z" level=info msg="Container e404a09cbee4a3190c0b5589e1400e15f350a1874ef7c9fc14cf339c80fc4170: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:23.131393 containerd[1582]: time="2025-12-16T12:26:23.131337775Z" level=info msg="CreateContainer within sandbox \"82d22e7c46dbad074ab6be01b2a7269c79f5bb5006a2fb5689d2bab6e01715a4\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e404a09cbee4a3190c0b5589e1400e15f350a1874ef7c9fc14cf339c80fc4170\"" Dec 16 12:26:23.132123 containerd[1582]: time="2025-12-16T12:26:23.132097149Z" level=info msg="StartContainer for \"e404a09cbee4a3190c0b5589e1400e15f350a1874ef7c9fc14cf339c80fc4170\"" Dec 16 12:26:23.133267 containerd[1582]: time="2025-12-16T12:26:23.133231847Z" level=info msg="connecting to shim e404a09cbee4a3190c0b5589e1400e15f350a1874ef7c9fc14cf339c80fc4170" address="unix:///run/containerd/s/c727e097d67ce5f89e2c93dc1ae8c0e57bccb3342a9821957ebaca7b77ac65ea" protocol=ttrpc version=3 Dec 16 12:26:23.172569 systemd[1]: Started cri-containerd-e404a09cbee4a3190c0b5589e1400e15f350a1874ef7c9fc14cf339c80fc4170.scope - libcontainer container e404a09cbee4a3190c0b5589e1400e15f350a1874ef7c9fc14cf339c80fc4170. 
Dec 16 12:26:23.197000 audit: BPF prog-id=149 op=LOAD Dec 16 12:26:23.200692 kernel: audit: type=1334 audit(1765887983.197:508): prog-id=149 op=LOAD Dec 16 12:26:23.200769 kernel: audit: type=1334 audit(1765887983.198:509): prog-id=150 op=LOAD Dec 16 12:26:23.198000 audit: BPF prog-id=150 op=LOAD Dec 16 12:26:23.198000 audit[3129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220180 a2=98 a3=0 items=0 ppid=2861 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:23.204667 kernel: audit: type=1300 audit(1765887983.198:509): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220180 a2=98 a3=0 items=0 ppid=2861 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:23.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303461303963626565346133313930633062353538396531343030 Dec 16 12:26:23.208194 kernel: audit: type=1327 audit(1765887983.198:509): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303461303963626565346133313930633062353538396531343030 Dec 16 12:26:23.198000 audit: BPF prog-id=150 op=UNLOAD Dec 16 12:26:23.198000 audit[3129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:23.212959 kernel: audit: type=1334 audit(1765887983.198:510): prog-id=150 op=UNLOAD Dec 16 12:26:23.213042 kernel: audit: type=1300 audit(1765887983.198:510): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:23.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303461303963626565346133313930633062353538396531343030 Dec 16 12:26:23.198000 audit: BPF prog-id=151 op=LOAD Dec 16 12:26:23.217494 kernel: audit: type=1327 audit(1765887983.198:510): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303461303963626565346133313930633062353538396531343030 Dec 16 12:26:23.217551 kernel: audit: type=1334 audit(1765887983.198:511): prog-id=151 op=LOAD Dec 16 12:26:23.198000 audit[3129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002203e8 a2=98 a3=0 items=0 ppid=2861 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 12:26:23.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303461303963626565346133313930633062353538396531343030 Dec 16 12:26:23.199000 audit: BPF prog-id=152 op=LOAD Dec 16 12:26:23.199000 audit[3129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000220168 a2=98 a3=0 items=0 ppid=2861 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:23.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303461303963626565346133313930633062353538396531343030 Dec 16 12:26:23.206000 audit: BPF prog-id=152 op=UNLOAD Dec 16 12:26:23.206000 audit[3129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:23.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303461303963626565346133313930633062353538396531343030 Dec 16 12:26:23.206000 audit: BPF prog-id=151 op=UNLOAD Dec 16 12:26:23.206000 audit[3129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:23.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303461303963626565346133313930633062353538396531343030 Dec 16 12:26:23.207000 audit: BPF prog-id=153 op=LOAD Dec 16 12:26:23.207000 audit[3129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220648 a2=98 a3=0 items=0 ppid=2861 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:23.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534303461303963626565346133313930633062353538396531343030 Dec 16 12:26:23.239309 containerd[1582]: time="2025-12-16T12:26:23.239249361Z" level=info msg="StartContainer for \"e404a09cbee4a3190c0b5589e1400e15f350a1874ef7c9fc14cf339c80fc4170\" returns successfully" Dec 16 12:26:24.468838 update_engine[1555]: I20251216 12:26:24.468636 1555 update_attempter.cc:509] Updating boot flags... 
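
The PROCTITLE fields in the audit records above carry the audited process's command line, hex-encoded by the kernel because the argv words are separated by NUL bytes; ausearch -i renders them the same way. A minimal sketch that decodes them from raw journal lines on stdin, using a hypothetical decode_proctitle helper:

# Decode hex-encoded PROCTITLE fields from audit records such as the ones above.
# Hypothetical helper: reads raw log lines on stdin, prints the decoded command lines.
import re
import sys

def decode_proctitle(line: str) -> str | None:
    m = re.search(r"proctitle=([0-9A-Fa-f]{2,})", line)
    if not m:
        return None
    hexstr = m.group(1)
    if len(hexstr) % 2:
        hexstr = hexstr[:-1]  # tolerate lines truncated mid-byte
    raw = bytes.fromhex(hexstr)
    # argv words are NUL-separated; join them with spaces for display
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

if __name__ == "__main__":
    for line in sys.stdin:
        decoded = decode_proctitle(line)
        if decoded:
            print(decoded)

Applied to the records above this yields runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>, matching the containers being started here; where the log line itself is cut off, the decoded container ID is cut off too.
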
Dec 16 12:26:25.580900 sudo[1787]: pam_unix(sudo:session): session closed for user root Dec 16 12:26:25.580000 audit[1787]: USER_END pid=1787 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:26:25.580000 audit[1787]: CRED_DISP pid=1787 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:26:25.583482 sshd[1786]: Connection closed by 10.0.0.1 port 34534 Dec 16 12:26:25.584014 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:25.586000 audit[1783]: USER_END pid=1783 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:25.586000 audit[1783]: CRED_DISP pid=1783 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:25.589847 systemd[1]: sshd@6-10.0.0.45:22-10.0.0.1:34534.service: Deactivated successfully. Dec 16 12:26:25.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.45:22-10.0.0.1:34534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:25.591848 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:26:25.592057 systemd[1]: session-7.scope: Consumed 5.393s CPU time, 216.2M memory peak. Dec 16 12:26:25.592993 systemd-logind[1553]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:26:25.594426 systemd-logind[1553]: Removed session 7. 
Dec 16 12:26:28.018000 audit[3202]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:28.022200 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 12:26:28.022301 kernel: audit: type=1325 audit(1765887988.018:521): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:28.018000 audit[3202]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd2472e20 a2=0 a3=1 items=0 ppid=2851 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.027313 kernel: audit: type=1300 audit(1765887988.018:521): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd2472e20 a2=0 a3=1 items=0 ppid=2851 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.018000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:28.030151 kernel: audit: type=1327 audit(1765887988.018:521): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:28.030235 kernel: audit: type=1325 audit(1765887988.029:522): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:28.029000 audit[3202]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:28.029000 audit[3202]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd2472e20 a2=0 a3=1 items=0 ppid=2851 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.029000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:28.042351 kernel: audit: type=1300 audit(1765887988.029:522): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd2472e20 a2=0 a3=1 items=0 ppid=2851 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:28.046343 kernel: audit: type=1327 audit(1765887988.029:522): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:29.051000 audit[3204]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:29.051000 audit[3204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffed477bc0 a2=0 a3=1 items=0 ppid=2851 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.058995 kernel: audit: type=1325 audit(1765887989.051:523): table=filter:107 family=2 entries=16 op=nft_register_rule 
pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:29.059091 kernel: audit: type=1300 audit(1765887989.051:523): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffed477bc0 a2=0 a3=1 items=0 ppid=2851 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.051000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:29.061348 kernel: audit: type=1327 audit(1765887989.051:523): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:29.060000 audit[3204]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:29.060000 audit[3204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffed477bc0 a2=0 a3=1 items=0 ppid=2851 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:29.064426 kernel: audit: type=1325 audit(1765887989.060:524): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:29.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:34.251000 audit[3207]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:34.255462 kernel: kauditd_printk_skb: 2 callbacks suppressed Dec 16 12:26:34.255538 kernel: audit: type=1325 audit(1765887994.251:525): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:34.251000 audit[3207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc2851030 a2=0 a3=1 items=0 ppid=2851 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:34.259647 kernel: audit: type=1300 audit(1765887994.251:525): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc2851030 a2=0 a3=1 items=0 ppid=2851 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:34.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:34.263417 kernel: audit: type=1327 audit(1765887994.251:525): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:34.262000 audit[3207]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:34.262000 audit[3207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc2851030 a2=0 a3=1 items=0 ppid=2851 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:34.270700 kernel: audit: type=1325 audit(1765887994.262:526): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:34.270798 kernel: audit: type=1300 audit(1765887994.262:526): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc2851030 a2=0 a3=1 items=0 ppid=2851 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:34.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:34.277368 kernel: audit: type=1327 audit(1765887994.262:526): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:35.281000 audit[3211]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:35.281000 audit[3211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffca7bc130 a2=0 a3=1 items=0 ppid=2851 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:35.288688 kernel: audit: type=1325 audit(1765887995.281:527): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:35.288757 kernel: audit: type=1300 audit(1765887995.281:527): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffca7bc130 a2=0 a3=1 items=0 ppid=2851 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:35.281000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:35.290377 kernel: audit: type=1327 audit(1765887995.281:527): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:35.292000 audit[3211]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:35.292000 audit[3211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffca7bc130 a2=0 a3=1 items=0 ppid=2851 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:35.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:35.296352 kernel: audit: type=1325 audit(1765887995.292:528): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:37.375000 audit[3213]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:37.375000 audit[3213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffce52ad90 a2=0 a3=1 items=0 ppid=2851 
pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:37.385000 audit[3213]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:37.385000 audit[3213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffce52ad90 a2=0 a3=1 items=0 ppid=2851 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.385000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:37.400792 kubelet[2731]: I1216 12:26:37.399607 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6006b10f-1f2d-4697-8227-8e9385460e5d-typha-certs\") pod \"calico-typha-698945d8f4-hclzx\" (UID: \"6006b10f-1f2d-4697-8227-8e9385460e5d\") " pod="calico-system/calico-typha-698945d8f4-hclzx" Dec 16 12:26:37.400792 kubelet[2731]: I1216 12:26:37.399665 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srhtg\" (UniqueName: \"kubernetes.io/projected/6006b10f-1f2d-4697-8227-8e9385460e5d-kube-api-access-srhtg\") pod \"calico-typha-698945d8f4-hclzx\" (UID: \"6006b10f-1f2d-4697-8227-8e9385460e5d\") " pod="calico-system/calico-typha-698945d8f4-hclzx" Dec 16 12:26:37.400792 kubelet[2731]: I1216 12:26:37.399686 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6006b10f-1f2d-4697-8227-8e9385460e5d-tigera-ca-bundle\") pod \"calico-typha-698945d8f4-hclzx\" (UID: \"6006b10f-1f2d-4697-8227-8e9385460e5d\") " pod="calico-system/calico-typha-698945d8f4-hclzx" Dec 16 12:26:37.400720 systemd[1]: Created slice kubepods-besteffort-pod6006b10f_1f2d_4697_8227_8e9385460e5d.slice - libcontainer container kubepods-besteffort-pod6006b10f_1f2d_4697_8227_8e9385460e5d.slice. Dec 16 12:26:37.626219 systemd[1]: Created slice kubepods-besteffort-pod04b625ee_cb3c_43f8_b021_c1b73b3f1ae3.slice - libcontainer container kubepods-besteffort-pod04b625ee_cb3c_43f8_b021_c1b73b3f1ae3.slice. 
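
The NETFILTER_CFG records above are periodic iptables-restore -w 5 --noflush --counters runs (that is what their PROCTITLE hex decodes to, most likely kube-proxy syncing Service rules): the filter-table registrations grow from 15 to 21 entries across this span while the nat table stays at 12. A small sketch, with a hypothetical regex over raw log lines, for watching that growth; the kernel's type=1325 echoes repeat the same events but lack the NETFILTER_CFG keyword, so they are not double-counted:

# Track how many rules each iptables-restore run registers per table, based on
# NETFILTER_CFG audit records like the ones above. Hypothetical helper over raw
# log lines read from stdin.
import re
import sys
from collections import defaultdict

PATTERN = re.compile(r"NETFILTER_CFG table=(?P<table>\w+):\d+ family=\d+ entries=(?P<entries>\d+)")

def main() -> None:
    history: dict[str, list[int]] = defaultdict(list)
    for line in sys.stdin:
        m = PATTERN.search(line)
        if m:
            history[m.group("table")].append(int(m.group("entries")))
    for table, counts in sorted(history.items()):
        print(f"{table}: {counts}")

if __name__ == "__main__":
    main()

Fed the NETFILTER_CFG records in this span, it reports filter: [15, 16, 17, 19, 21] and nat: [12, 12, 12, 12, 12].
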
Dec 16 12:26:37.701981 kubelet[2731]: I1216 12:26:37.701916 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/04b625ee-cb3c-43f8-b021-c1b73b3f1ae3-cni-bin-dir\") pod \"calico-node-mqw9w\" (UID: \"04b625ee-cb3c-43f8-b021-c1b73b3f1ae3\") " pod="calico-system/calico-node-mqw9w" Dec 16 12:26:37.701981 kubelet[2731]: I1216 12:26:37.701967 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/04b625ee-cb3c-43f8-b021-c1b73b3f1ae3-var-run-calico\") pod \"calico-node-mqw9w\" (UID: \"04b625ee-cb3c-43f8-b021-c1b73b3f1ae3\") " pod="calico-system/calico-node-mqw9w" Dec 16 12:26:37.701981 kubelet[2731]: I1216 12:26:37.701991 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/04b625ee-cb3c-43f8-b021-c1b73b3f1ae3-policysync\") pod \"calico-node-mqw9w\" (UID: \"04b625ee-cb3c-43f8-b021-c1b73b3f1ae3\") " pod="calico-system/calico-node-mqw9w" Dec 16 12:26:37.702180 kubelet[2731]: I1216 12:26:37.702007 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04b625ee-cb3c-43f8-b021-c1b73b3f1ae3-lib-modules\") pod \"calico-node-mqw9w\" (UID: \"04b625ee-cb3c-43f8-b021-c1b73b3f1ae3\") " pod="calico-system/calico-node-mqw9w" Dec 16 12:26:37.702180 kubelet[2731]: I1216 12:26:37.702079 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04b625ee-cb3c-43f8-b021-c1b73b3f1ae3-tigera-ca-bundle\") pod \"calico-node-mqw9w\" (UID: \"04b625ee-cb3c-43f8-b021-c1b73b3f1ae3\") " pod="calico-system/calico-node-mqw9w" Dec 16 12:26:37.702180 kubelet[2731]: I1216 12:26:37.702151 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/04b625ee-cb3c-43f8-b021-c1b73b3f1ae3-flexvol-driver-host\") pod \"calico-node-mqw9w\" (UID: \"04b625ee-cb3c-43f8-b021-c1b73b3f1ae3\") " pod="calico-system/calico-node-mqw9w" Dec 16 12:26:37.702246 kubelet[2731]: I1216 12:26:37.702192 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/04b625ee-cb3c-43f8-b021-c1b73b3f1ae3-node-certs\") pod \"calico-node-mqw9w\" (UID: \"04b625ee-cb3c-43f8-b021-c1b73b3f1ae3\") " pod="calico-system/calico-node-mqw9w" Dec 16 12:26:37.702246 kubelet[2731]: I1216 12:26:37.702215 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6b62\" (UniqueName: \"kubernetes.io/projected/04b625ee-cb3c-43f8-b021-c1b73b3f1ae3-kube-api-access-r6b62\") pod \"calico-node-mqw9w\" (UID: \"04b625ee-cb3c-43f8-b021-c1b73b3f1ae3\") " pod="calico-system/calico-node-mqw9w" Dec 16 12:26:37.702246 kubelet[2731]: I1216 12:26:37.702234 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/04b625ee-cb3c-43f8-b021-c1b73b3f1ae3-cni-net-dir\") pod \"calico-node-mqw9w\" (UID: \"04b625ee-cb3c-43f8-b021-c1b73b3f1ae3\") " pod="calico-system/calico-node-mqw9w" Dec 16 12:26:37.702302 kubelet[2731]: I1216 12:26:37.702263 2731 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/04b625ee-cb3c-43f8-b021-c1b73b3f1ae3-cni-log-dir\") pod \"calico-node-mqw9w\" (UID: \"04b625ee-cb3c-43f8-b021-c1b73b3f1ae3\") " pod="calico-system/calico-node-mqw9w" Dec 16 12:26:37.702336 kubelet[2731]: I1216 12:26:37.702310 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/04b625ee-cb3c-43f8-b021-c1b73b3f1ae3-var-lib-calico\") pod \"calico-node-mqw9w\" (UID: \"04b625ee-cb3c-43f8-b021-c1b73b3f1ae3\") " pod="calico-system/calico-node-mqw9w" Dec 16 12:26:37.702378 kubelet[2731]: I1216 12:26:37.702357 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04b625ee-cb3c-43f8-b021-c1b73b3f1ae3-xtables-lock\") pod \"calico-node-mqw9w\" (UID: \"04b625ee-cb3c-43f8-b021-c1b73b3f1ae3\") " pod="calico-system/calico-node-mqw9w" Dec 16 12:26:37.711116 kubelet[2731]: E1216 12:26:37.710965 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:37.711677 containerd[1582]: time="2025-12-16T12:26:37.711609695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-698945d8f4-hclzx,Uid:6006b10f-1f2d-4697-8227-8e9385460e5d,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:37.753714 containerd[1582]: time="2025-12-16T12:26:37.753666180Z" level=info msg="connecting to shim 09d1ef32d97421550e43678712aa6cd14282537ad92c1e787998f7af409551fd" address="unix:///run/containerd/s/bfd9638d048b622e7211d583ea26fa4037c53111703f0828325d2f99107ada7f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:37.790497 systemd[1]: Started cri-containerd-09d1ef32d97421550e43678712aa6cd14282537ad92c1e787998f7af409551fd.scope - libcontainer container 09d1ef32d97421550e43678712aa6cd14282537ad92c1e787998f7af409551fd. Dec 16 12:26:37.815950 kubelet[2731]: E1216 12:26:37.815639 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.815950 kubelet[2731]: W1216 12:26:37.815687 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.815950 kubelet[2731]: E1216 12:26:37.815713 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.817260 kubelet[2731]: E1216 12:26:37.817202 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.817260 kubelet[2731]: W1216 12:26:37.817230 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.817582 kubelet[2731]: E1216 12:26:37.817254 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:37.820039 kubelet[2731]: E1216 12:26:37.819996 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8dw5m" podUID="8ba61881-a1a2-472c-ad8a-7b1172620126" Dec 16 12:26:37.836421 kubelet[2731]: E1216 12:26:37.836380 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.836703 kubelet[2731]: W1216 12:26:37.836665 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.836779 kubelet[2731]: E1216 12:26:37.836711 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.863000 audit: BPF prog-id=154 op=LOAD Dec 16 12:26:37.864000 audit: BPF prog-id=155 op=LOAD Dec 16 12:26:37.864000 audit[3235]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3225 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039643165663332643937343231353530653433363738373132616136 Dec 16 12:26:37.864000 audit: BPF prog-id=155 op=UNLOAD Dec 16 12:26:37.864000 audit[3235]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3225 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039643165663332643937343231353530653433363738373132616136 Dec 16 12:26:37.864000 audit: BPF prog-id=156 op=LOAD Dec 16 12:26:37.864000 audit[3235]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3225 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.864000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039643165663332643937343231353530653433363738373132616136 Dec 16 12:26:37.865000 audit: BPF prog-id=157 op=LOAD Dec 16 12:26:37.865000 audit[3235]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3225 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039643165663332643937343231353530653433363738373132616136 Dec 16 12:26:37.865000 audit: BPF prog-id=157 op=UNLOAD Dec 16 12:26:37.865000 audit[3235]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3225 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039643165663332643937343231353530653433363738373132616136 Dec 16 12:26:37.865000 audit: BPF prog-id=156 op=UNLOAD Dec 16 12:26:37.865000 audit[3235]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3225 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039643165663332643937343231353530653433363738373132616136 Dec 16 12:26:37.865000 audit: BPF prog-id=158 op=LOAD Dec 16 12:26:37.865000 audit[3235]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3225 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039643165663332643937343231353530653433363738373132616136 Dec 16 12:26:37.894380 kubelet[2731]: E1216 12:26:37.893610 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.894380 kubelet[2731]: W1216 12:26:37.893641 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.894380 kubelet[2731]: E1216 12:26:37.893663 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:37.894380 kubelet[2731]: E1216 12:26:37.893868 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.894380 kubelet[2731]: W1216 12:26:37.893877 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.894380 kubelet[2731]: E1216 12:26:37.893933 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.894380 kubelet[2731]: E1216 12:26:37.894237 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.894380 kubelet[2731]: W1216 12:26:37.894247 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.894380 kubelet[2731]: E1216 12:26:37.894257 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.895409 kubelet[2731]: E1216 12:26:37.894927 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.895409 kubelet[2731]: W1216 12:26:37.894936 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.895409 kubelet[2731]: E1216 12:26:37.894946 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.895409 kubelet[2731]: E1216 12:26:37.895183 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.895409 kubelet[2731]: W1216 12:26:37.895192 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.895409 kubelet[2731]: E1216 12:26:37.895201 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.895805 kubelet[2731]: E1216 12:26:37.895680 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.895805 kubelet[2731]: W1216 12:26:37.895692 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.895805 kubelet[2731]: E1216 12:26:37.895704 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:37.896367 kubelet[2731]: E1216 12:26:37.896346 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.896367 kubelet[2731]: W1216 12:26:37.896364 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.896469 kubelet[2731]: E1216 12:26:37.896377 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.896570 kubelet[2731]: E1216 12:26:37.896559 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.896640 kubelet[2731]: W1216 12:26:37.896571 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.896680 kubelet[2731]: E1216 12:26:37.896643 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.896864 kubelet[2731]: E1216 12:26:37.896850 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.896864 kubelet[2731]: W1216 12:26:37.896863 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.896927 kubelet[2731]: E1216 12:26:37.896872 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.897140 kubelet[2731]: E1216 12:26:37.897124 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.897140 kubelet[2731]: W1216 12:26:37.897139 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.897225 kubelet[2731]: E1216 12:26:37.897149 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.897379 kubelet[2731]: E1216 12:26:37.897365 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.897379 kubelet[2731]: W1216 12:26:37.897377 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.897451 kubelet[2731]: E1216 12:26:37.897389 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:37.897626 kubelet[2731]: E1216 12:26:37.897611 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.897626 kubelet[2731]: W1216 12:26:37.897625 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.897685 kubelet[2731]: E1216 12:26:37.897634 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.897843 kubelet[2731]: E1216 12:26:37.897818 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.897843 kubelet[2731]: W1216 12:26:37.897843 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.897920 kubelet[2731]: E1216 12:26:37.897851 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.898044 kubelet[2731]: E1216 12:26:37.898030 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.898044 kubelet[2731]: W1216 12:26:37.898044 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.898115 kubelet[2731]: E1216 12:26:37.898052 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.898240 kubelet[2731]: E1216 12:26:37.898208 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.898240 kubelet[2731]: W1216 12:26:37.898238 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.898302 kubelet[2731]: E1216 12:26:37.898248 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.898452 kubelet[2731]: E1216 12:26:37.898432 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.898452 kubelet[2731]: W1216 12:26:37.898445 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.898507 kubelet[2731]: E1216 12:26:37.898461 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:37.898666 kubelet[2731]: E1216 12:26:37.898637 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.898708 kubelet[2731]: W1216 12:26:37.898665 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.898708 kubelet[2731]: E1216 12:26:37.898676 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.898854 kubelet[2731]: E1216 12:26:37.898842 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.898854 kubelet[2731]: W1216 12:26:37.898853 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.898917 kubelet[2731]: E1216 12:26:37.898862 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.899037 kubelet[2731]: E1216 12:26:37.899021 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.899037 kubelet[2731]: W1216 12:26:37.899033 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.899126 kubelet[2731]: E1216 12:26:37.899041 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.899232 kubelet[2731]: E1216 12:26:37.899205 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.899270 kubelet[2731]: W1216 12:26:37.899232 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.899270 kubelet[2731]: E1216 12:26:37.899243 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.904941 kubelet[2731]: E1216 12:26:37.904765 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.904941 kubelet[2731]: W1216 12:26:37.904788 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.904941 kubelet[2731]: E1216 12:26:37.904806 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:37.904941 kubelet[2731]: I1216 12:26:37.904834 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8ba61881-a1a2-472c-ad8a-7b1172620126-socket-dir\") pod \"csi-node-driver-8dw5m\" (UID: \"8ba61881-a1a2-472c-ad8a-7b1172620126\") " pod="calico-system/csi-node-driver-8dw5m" Dec 16 12:26:37.905273 containerd[1582]: time="2025-12-16T12:26:37.905228681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-698945d8f4-hclzx,Uid:6006b10f-1f2d-4697-8227-8e9385460e5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"09d1ef32d97421550e43678712aa6cd14282537ad92c1e787998f7af409551fd\"" Dec 16 12:26:37.905458 kubelet[2731]: E1216 12:26:37.905239 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.905458 kubelet[2731]: W1216 12:26:37.905350 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.905458 kubelet[2731]: E1216 12:26:37.905370 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.905458 kubelet[2731]: I1216 12:26:37.905396 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ba61881-a1a2-472c-ad8a-7b1172620126-kubelet-dir\") pod \"csi-node-driver-8dw5m\" (UID: \"8ba61881-a1a2-472c-ad8a-7b1172620126\") " pod="calico-system/csi-node-driver-8dw5m" Dec 16 12:26:37.905738 kubelet[2731]: E1216 12:26:37.905712 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.905738 kubelet[2731]: W1216 12:26:37.905732 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.905914 kubelet[2731]: E1216 12:26:37.905746 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.905965 kubelet[2731]: E1216 12:26:37.905951 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.905965 kubelet[2731]: W1216 12:26:37.905961 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.906043 kubelet[2731]: E1216 12:26:37.905970 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:37.906198 kubelet[2731]: E1216 12:26:37.906180 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.906389 kubelet[2731]: W1216 12:26:37.906193 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.906389 kubelet[2731]: E1216 12:26:37.906390 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.906480 kubelet[2731]: I1216 12:26:37.906418 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8ba61881-a1a2-472c-ad8a-7b1172620126-registration-dir\") pod \"csi-node-driver-8dw5m\" (UID: \"8ba61881-a1a2-472c-ad8a-7b1172620126\") " pod="calico-system/csi-node-driver-8dw5m" Dec 16 12:26:37.906774 kubelet[2731]: E1216 12:26:37.906674 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.906774 kubelet[2731]: W1216 12:26:37.906690 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.906774 kubelet[2731]: E1216 12:26:37.906700 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.906774 kubelet[2731]: I1216 12:26:37.906769 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8ba61881-a1a2-472c-ad8a-7b1172620126-varrun\") pod \"csi-node-driver-8dw5m\" (UID: \"8ba61881-a1a2-472c-ad8a-7b1172620126\") " pod="calico-system/csi-node-driver-8dw5m" Dec 16 12:26:37.907011 kubelet[2731]: E1216 12:26:37.906963 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.907011 kubelet[2731]: W1216 12:26:37.906973 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.907011 kubelet[2731]: E1216 12:26:37.906983 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.907514 kubelet[2731]: E1216 12:26:37.907494 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.907514 kubelet[2731]: W1216 12:26:37.907514 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.907615 kubelet[2731]: E1216 12:26:37.907530 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:37.907910 kubelet[2731]: E1216 12:26:37.907892 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.907910 kubelet[2731]: W1216 12:26:37.907908 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.907988 kubelet[2731]: E1216 12:26:37.907921 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.908119 kubelet[2731]: I1216 12:26:37.907949 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj7s4\" (UniqueName: \"kubernetes.io/projected/8ba61881-a1a2-472c-ad8a-7b1172620126-kube-api-access-mj7s4\") pod \"csi-node-driver-8dw5m\" (UID: \"8ba61881-a1a2-472c-ad8a-7b1172620126\") " pod="calico-system/csi-node-driver-8dw5m" Dec 16 12:26:37.908260 kubelet[2731]: E1216 12:26:37.908232 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:37.908508 kubelet[2731]: E1216 12:26:37.908488 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.908605 kubelet[2731]: W1216 12:26:37.908589 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.908691 kubelet[2731]: E1216 12:26:37.908674 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.909063 kubelet[2731]: E1216 12:26:37.909009 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.909063 kubelet[2731]: W1216 12:26:37.909024 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.909063 kubelet[2731]: E1216 12:26:37.909036 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.909618 kubelet[2731]: E1216 12:26:37.909574 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.909618 kubelet[2731]: W1216 12:26:37.909591 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.909618 kubelet[2731]: E1216 12:26:37.909604 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:37.910050 kubelet[2731]: E1216 12:26:37.910034 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.910164 kubelet[2731]: W1216 12:26:37.910120 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.910164 kubelet[2731]: E1216 12:26:37.910142 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.910669 kubelet[2731]: E1216 12:26:37.910581 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.910669 kubelet[2731]: W1216 12:26:37.910616 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.910669 kubelet[2731]: E1216 12:26:37.910634 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.911277 kubelet[2731]: E1216 12:26:37.911228 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:37.911277 kubelet[2731]: W1216 12:26:37.911251 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:37.911570 kubelet[2731]: E1216 12:26:37.911548 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:37.912009 containerd[1582]: time="2025-12-16T12:26:37.911980412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:26:37.931545 kubelet[2731]: E1216 12:26:37.931171 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:37.932035 containerd[1582]: time="2025-12-16T12:26:37.931993750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mqw9w,Uid:04b625ee-cb3c-43f8-b021-c1b73b3f1ae3,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:37.954310 containerd[1582]: time="2025-12-16T12:26:37.954259906Z" level=info msg="connecting to shim 5bc71a37be2eaf871dcfaa51def484c23e480f8fac289aa6767bc908553beff5" address="unix:///run/containerd/s/be5c97897981944d6e0559d6e9fb8d4e6a2f0f2d031e471fd39e512e301678d5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:37.981626 systemd[1]: Started cri-containerd-5bc71a37be2eaf871dcfaa51def484c23e480f8fac289aa6767bc908553beff5.scope - libcontainer container 5bc71a37be2eaf871dcfaa51def484c23e480f8fac289aa6767bc908553beff5. 
Dec 16 12:26:37.993000 audit: BPF prog-id=159 op=LOAD Dec 16 12:26:37.993000 audit: BPF prog-id=160 op=LOAD Dec 16 12:26:37.993000 audit[3335]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3322 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633731613337626532656166383731646366616135316465663438 Dec 16 12:26:37.993000 audit: BPF prog-id=160 op=UNLOAD Dec 16 12:26:37.993000 audit[3335]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633731613337626532656166383731646366616135316465663438 Dec 16 12:26:37.994000 audit: BPF prog-id=161 op=LOAD Dec 16 12:26:37.994000 audit[3335]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3322 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633731613337626532656166383731646366616135316465663438 Dec 16 12:26:37.994000 audit: BPF prog-id=162 op=LOAD Dec 16 12:26:37.994000 audit[3335]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3322 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633731613337626532656166383731646366616135316465663438 Dec 16 12:26:37.994000 audit: BPF prog-id=162 op=UNLOAD Dec 16 12:26:37.994000 audit[3335]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633731613337626532656166383731646366616135316465663438 Dec 16 12:26:37.994000 audit: BPF prog-id=161 op=UNLOAD Dec 16 12:26:37.994000 audit[3335]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633731613337626532656166383731646366616135316465663438 Dec 16 12:26:37.994000 audit: BPF prog-id=163 op=LOAD Dec 16 12:26:37.994000 audit[3335]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3322 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:37.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633731613337626532656166383731646366616135316465663438 Dec 16 12:26:38.012330 kubelet[2731]: E1216 12:26:38.012276 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.012330 kubelet[2731]: W1216 12:26:38.012304 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.012330 kubelet[2731]: E1216 12:26:38.012345 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.012589 kubelet[2731]: E1216 12:26:38.012555 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.012589 kubelet[2731]: W1216 12:26:38.012582 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.012672 kubelet[2731]: E1216 12:26:38.012596 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.012805 kubelet[2731]: E1216 12:26:38.012773 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.012805 kubelet[2731]: W1216 12:26:38.012785 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.012805 kubelet[2731]: E1216 12:26:38.012794 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:38.012996 kubelet[2731]: E1216 12:26:38.012969 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.012996 kubelet[2731]: W1216 12:26:38.012981 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.013043 kubelet[2731]: E1216 12:26:38.013000 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.013274 kubelet[2731]: E1216 12:26:38.013239 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.013274 kubelet[2731]: W1216 12:26:38.013251 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.013274 kubelet[2731]: E1216 12:26:38.013261 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.013518 kubelet[2731]: E1216 12:26:38.013505 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.013540 kubelet[2731]: W1216 12:26:38.013517 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.013540 kubelet[2731]: E1216 12:26:38.013527 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.013702 kubelet[2731]: E1216 12:26:38.013688 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.013725 kubelet[2731]: W1216 12:26:38.013715 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.013749 kubelet[2731]: E1216 12:26:38.013725 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.013910 kubelet[2731]: E1216 12:26:38.013898 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.013910 kubelet[2731]: W1216 12:26:38.013908 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.013951 kubelet[2731]: E1216 12:26:38.013922 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:38.014094 kubelet[2731]: E1216 12:26:38.014083 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.014115 kubelet[2731]: W1216 12:26:38.014094 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.014461 kubelet[2731]: E1216 12:26:38.014382 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.014611 kubelet[2731]: E1216 12:26:38.014596 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.014664 kubelet[2731]: W1216 12:26:38.014616 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.014664 kubelet[2731]: E1216 12:26:38.014627 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.014797 kubelet[2731]: E1216 12:26:38.014779 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.014939 kubelet[2731]: W1216 12:26:38.014919 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.014972 kubelet[2731]: E1216 12:26:38.014943 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.015206 kubelet[2731]: E1216 12:26:38.015192 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.015206 kubelet[2731]: W1216 12:26:38.015205 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.015258 kubelet[2731]: E1216 12:26:38.015216 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.015711 kubelet[2731]: E1216 12:26:38.015614 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.015711 kubelet[2731]: W1216 12:26:38.015626 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.015711 kubelet[2731]: E1216 12:26:38.015637 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:38.015847 kubelet[2731]: E1216 12:26:38.015830 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.015952 kubelet[2731]: W1216 12:26:38.015841 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.015983 kubelet[2731]: E1216 12:26:38.015956 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.016293 kubelet[2731]: E1216 12:26:38.016274 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.016340 kubelet[2731]: W1216 12:26:38.016293 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.016340 kubelet[2731]: E1216 12:26:38.016306 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.016574 kubelet[2731]: E1216 12:26:38.016554 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.016574 kubelet[2731]: W1216 12:26:38.016575 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.016967 kubelet[2731]: E1216 12:26:38.016589 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.016967 kubelet[2731]: E1216 12:26:38.016811 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.016967 kubelet[2731]: W1216 12:26:38.016819 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.016967 kubelet[2731]: E1216 12:26:38.016828 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.016967 kubelet[2731]: E1216 12:26:38.016971 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.017098 kubelet[2731]: W1216 12:26:38.016979 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.017098 kubelet[2731]: E1216 12:26:38.016987 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:38.017189 kubelet[2731]: E1216 12:26:38.017167 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.017189 kubelet[2731]: W1216 12:26:38.017182 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.017237 kubelet[2731]: E1216 12:26:38.017191 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.017422 kubelet[2731]: E1216 12:26:38.017401 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.017422 kubelet[2731]: W1216 12:26:38.017416 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.017488 kubelet[2731]: E1216 12:26:38.017427 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.017891 kubelet[2731]: E1216 12:26:38.017793 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.017891 kubelet[2731]: W1216 12:26:38.017808 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.017891 kubelet[2731]: E1216 12:26:38.017818 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.018091 kubelet[2731]: E1216 12:26:38.018047 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.018091 kubelet[2731]: W1216 12:26:38.018057 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.018091 kubelet[2731]: E1216 12:26:38.018066 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.018427 kubelet[2731]: E1216 12:26:38.018311 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.018427 kubelet[2731]: W1216 12:26:38.018340 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.018427 kubelet[2731]: E1216 12:26:38.018354 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:38.018901 kubelet[2731]: E1216 12:26:38.018874 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.018901 kubelet[2731]: W1216 12:26:38.018893 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.018961 kubelet[2731]: E1216 12:26:38.018910 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.019214 kubelet[2731]: E1216 12:26:38.019147 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.019214 kubelet[2731]: W1216 12:26:38.019165 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.019214 kubelet[2731]: E1216 12:26:38.019175 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:38.033600 containerd[1582]: time="2025-12-16T12:26:38.033538750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mqw9w,Uid:04b625ee-cb3c-43f8-b021-c1b73b3f1ae3,Namespace:calico-system,Attempt:0,} returns sandbox id \"5bc71a37be2eaf871dcfaa51def484c23e480f8fac289aa6767bc908553beff5\"" Dec 16 12:26:38.033990 kubelet[2731]: E1216 12:26:38.033972 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:38.033990 kubelet[2731]: W1216 12:26:38.033988 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:38.034092 kubelet[2731]: E1216 12:26:38.034008 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:38.034356 kubelet[2731]: E1216 12:26:38.034336 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:38.402000 audit[3389]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:38.402000 audit[3389]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffe394d50 a2=0 a3=1 items=0 ppid=2851 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:38.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:38.416000 audit[3389]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:38.416000 audit[3389]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe394d50 a2=0 a3=1 items=0 ppid=2851 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:38.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:38.963802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount393312508.mount: Deactivated successfully. Dec 16 12:26:39.468328 containerd[1582]: time="2025-12-16T12:26:39.468260387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:39.468925 containerd[1582]: time="2025-12-16T12:26:39.468868905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 16 12:26:39.470179 containerd[1582]: time="2025-12-16T12:26:39.470144673Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:39.472657 containerd[1582]: time="2025-12-16T12:26:39.472575225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:39.473368 containerd[1582]: time="2025-12-16T12:26:39.473138174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.561113913s" Dec 16 12:26:39.473368 containerd[1582]: time="2025-12-16T12:26:39.473178862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:26:39.476528 containerd[1582]: time="2025-12-16T12:26:39.476483263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:26:39.491106 
containerd[1582]: time="2025-12-16T12:26:39.491041248Z" level=info msg="CreateContainer within sandbox \"09d1ef32d97421550e43678712aa6cd14282537ad92c1e787998f7af409551fd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:26:39.527631 containerd[1582]: time="2025-12-16T12:26:39.527577377Z" level=info msg="Container d93f69da35df26b6333988ac0467bfd161c53b4cbb58e6f5a0d2886e48076afb: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:39.541365 containerd[1582]: time="2025-12-16T12:26:39.541284997Z" level=info msg="CreateContainer within sandbox \"09d1ef32d97421550e43678712aa6cd14282537ad92c1e787998f7af409551fd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d93f69da35df26b6333988ac0467bfd161c53b4cbb58e6f5a0d2886e48076afb\"" Dec 16 12:26:39.541963 containerd[1582]: time="2025-12-16T12:26:39.541930082Z" level=info msg="StartContainer for \"d93f69da35df26b6333988ac0467bfd161c53b4cbb58e6f5a0d2886e48076afb\"" Dec 16 12:26:39.544172 containerd[1582]: time="2025-12-16T12:26:39.544083660Z" level=info msg="connecting to shim d93f69da35df26b6333988ac0467bfd161c53b4cbb58e6f5a0d2886e48076afb" address="unix:///run/containerd/s/bfd9638d048b622e7211d583ea26fa4037c53111703f0828325d2f99107ada7f" protocol=ttrpc version=3 Dec 16 12:26:39.569654 systemd[1]: Started cri-containerd-d93f69da35df26b6333988ac0467bfd161c53b4cbb58e6f5a0d2886e48076afb.scope - libcontainer container d93f69da35df26b6333988ac0467bfd161c53b4cbb58e6f5a0d2886e48076afb. Dec 16 12:26:39.583000 audit: BPF prog-id=164 op=LOAD Dec 16 12:26:39.585457 kernel: kauditd_printk_skb: 58 callbacks suppressed Dec 16 12:26:39.585604 kernel: audit: type=1334 audit(1765887999.583:549): prog-id=164 op=LOAD Dec 16 12:26:39.583000 audit: BPF prog-id=165 op=LOAD Dec 16 12:26:39.587137 kernel: audit: type=1334 audit(1765887999.583:550): prog-id=165 op=LOAD Dec 16 12:26:39.587179 kernel: audit: type=1300 audit(1765887999.583:550): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3225 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:39.583000 audit[3400]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3225 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:39.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336636396461333564663236623633333339383861633034363762 Dec 16 12:26:39.594200 kernel: audit: type=1327 audit(1765887999.583:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336636396461333564663236623633333339383861633034363762 Dec 16 12:26:39.594388 kernel: audit: type=1334 audit(1765887999.584:551): prog-id=165 op=UNLOAD Dec 16 12:26:39.584000 audit: BPF prog-id=165 op=UNLOAD Dec 16 12:26:39.584000 audit[3400]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3225 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:39.598430 kernel: audit: type=1300 audit(1765887999.584:551): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3225 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:39.598568 kernel: audit: type=1327 audit(1765887999.584:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336636396461333564663236623633333339383861633034363762 Dec 16 12:26:39.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336636396461333564663236623633333339383861633034363762 Dec 16 12:26:39.584000 audit: BPF prog-id=166 op=LOAD Dec 16 12:26:39.603148 kernel: audit: type=1334 audit(1765887999.584:552): prog-id=166 op=LOAD Dec 16 12:26:39.603260 kernel: audit: type=1300 audit(1765887999.584:552): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3225 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:39.584000 audit[3400]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3225 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:39.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336636396461333564663236623633333339383861633034363762 Dec 16 12:26:39.609988 kernel: audit: type=1327 audit(1765887999.584:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336636396461333564663236623633333339383861633034363762 Dec 16 12:26:39.585000 audit: BPF prog-id=167 op=LOAD Dec 16 12:26:39.585000 audit[3400]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3225 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:39.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336636396461333564663236623633333339383861633034363762 Dec 16 12:26:39.585000 audit: BPF prog-id=167 op=UNLOAD Dec 16 12:26:39.585000 audit[3400]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3225 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:39.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336636396461333564663236623633333339383861633034363762 Dec 16 12:26:39.585000 audit: BPF prog-id=166 op=UNLOAD Dec 16 12:26:39.585000 audit[3400]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3225 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:39.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336636396461333564663236623633333339383861633034363762 Dec 16 12:26:39.585000 audit: BPF prog-id=168 op=LOAD Dec 16 12:26:39.585000 audit[3400]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3225 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:39.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439336636396461333564663236623633333339383861633034363762 Dec 16 12:26:39.655703 containerd[1582]: time="2025-12-16T12:26:39.655597939Z" level=info msg="StartContainer for \"d93f69da35df26b6333988ac0467bfd161c53b4cbb58e6f5a0d2886e48076afb\" returns successfully" Dec 16 12:26:40.029060 kubelet[2731]: E1216 12:26:40.028997 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8dw5m" podUID="8ba61881-a1a2-472c-ad8a-7b1172620126" Dec 16 12:26:40.139766 kubelet[2731]: E1216 12:26:40.139711 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:40.215720 kubelet[2731]: E1216 12:26:40.215669 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.215720 kubelet[2731]: W1216 12:26:40.215697 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.215897 kubelet[2731]: E1216 12:26:40.215734 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:40.216055 kubelet[2731]: E1216 12:26:40.216013 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.216055 kubelet[2731]: W1216 12:26:40.216029 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.216055 kubelet[2731]: E1216 12:26:40.216041 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.216261 kubelet[2731]: E1216 12:26:40.216248 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.216261 kubelet[2731]: W1216 12:26:40.216259 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.216303 kubelet[2731]: E1216 12:26:40.216270 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.216457 kubelet[2731]: E1216 12:26:40.216445 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.216457 kubelet[2731]: W1216 12:26:40.216455 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.216531 kubelet[2731]: E1216 12:26:40.216464 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.216675 kubelet[2731]: E1216 12:26:40.216662 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.216675 kubelet[2731]: W1216 12:26:40.216672 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.216731 kubelet[2731]: E1216 12:26:40.216682 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.217065 kubelet[2731]: E1216 12:26:40.216998 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.217065 kubelet[2731]: W1216 12:26:40.217050 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.217129 kubelet[2731]: E1216 12:26:40.217094 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:40.217367 kubelet[2731]: E1216 12:26:40.217355 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.217402 kubelet[2731]: W1216 12:26:40.217366 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.217402 kubelet[2731]: E1216 12:26:40.217375 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.218768 kubelet[2731]: E1216 12:26:40.218740 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.218768 kubelet[2731]: W1216 12:26:40.218756 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.218768 kubelet[2731]: E1216 12:26:40.218769 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.218992 kubelet[2731]: E1216 12:26:40.218981 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.218992 kubelet[2731]: W1216 12:26:40.218992 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.219043 kubelet[2731]: E1216 12:26:40.219000 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.219176 kubelet[2731]: E1216 12:26:40.219166 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.219198 kubelet[2731]: W1216 12:26:40.219176 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.219198 kubelet[2731]: E1216 12:26:40.219184 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.219357 kubelet[2731]: E1216 12:26:40.219346 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.219386 kubelet[2731]: W1216 12:26:40.219356 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.219386 kubelet[2731]: E1216 12:26:40.219365 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:40.220244 kubelet[2731]: E1216 12:26:40.220212 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.220244 kubelet[2731]: W1216 12:26:40.220226 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.220244 kubelet[2731]: E1216 12:26:40.220237 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.220446 kubelet[2731]: E1216 12:26:40.220433 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.220446 kubelet[2731]: W1216 12:26:40.220445 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.220499 kubelet[2731]: E1216 12:26:40.220457 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.220643 kubelet[2731]: E1216 12:26:40.220632 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.220643 kubelet[2731]: W1216 12:26:40.220642 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.220688 kubelet[2731]: E1216 12:26:40.220650 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.221353 kubelet[2731]: E1216 12:26:40.221297 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.221353 kubelet[2731]: W1216 12:26:40.221311 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.221353 kubelet[2731]: E1216 12:26:40.221330 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.230286 kubelet[2731]: E1216 12:26:40.230247 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.230286 kubelet[2731]: W1216 12:26:40.230271 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.230286 kubelet[2731]: E1216 12:26:40.230289 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:40.230545 kubelet[2731]: E1216 12:26:40.230515 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.230545 kubelet[2731]: W1216 12:26:40.230528 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.230545 kubelet[2731]: E1216 12:26:40.230537 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.230979 kubelet[2731]: E1216 12:26:40.230957 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.231025 kubelet[2731]: W1216 12:26:40.230978 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.231025 kubelet[2731]: E1216 12:26:40.230994 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.231229 kubelet[2731]: E1216 12:26:40.231216 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.231229 kubelet[2731]: W1216 12:26:40.231227 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.231276 kubelet[2731]: E1216 12:26:40.231238 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.231436 kubelet[2731]: E1216 12:26:40.231426 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.231436 kubelet[2731]: W1216 12:26:40.231436 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.231493 kubelet[2731]: E1216 12:26:40.231444 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.231651 kubelet[2731]: E1216 12:26:40.231637 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.231651 kubelet[2731]: W1216 12:26:40.231648 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.231704 kubelet[2731]: E1216 12:26:40.231656 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:40.231930 kubelet[2731]: E1216 12:26:40.231896 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.231930 kubelet[2731]: W1216 12:26:40.231914 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.231930 kubelet[2731]: E1216 12:26:40.231927 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.232143 kubelet[2731]: E1216 12:26:40.232129 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.232166 kubelet[2731]: W1216 12:26:40.232143 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.232166 kubelet[2731]: E1216 12:26:40.232152 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.232304 kubelet[2731]: E1216 12:26:40.232294 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.232343 kubelet[2731]: W1216 12:26:40.232305 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.232343 kubelet[2731]: E1216 12:26:40.232313 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.232474 kubelet[2731]: E1216 12:26:40.232462 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.232504 kubelet[2731]: W1216 12:26:40.232474 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.232504 kubelet[2731]: E1216 12:26:40.232481 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.232666 kubelet[2731]: E1216 12:26:40.232653 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.232803 kubelet[2731]: W1216 12:26:40.232666 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.232803 kubelet[2731]: E1216 12:26:40.232674 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:40.232926 kubelet[2731]: E1216 12:26:40.232903 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.232926 kubelet[2731]: W1216 12:26:40.232922 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.232973 kubelet[2731]: E1216 12:26:40.232936 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.233134 kubelet[2731]: E1216 12:26:40.233122 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.233157 kubelet[2731]: W1216 12:26:40.233134 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.233157 kubelet[2731]: E1216 12:26:40.233143 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.233323 kubelet[2731]: E1216 12:26:40.233305 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.233359 kubelet[2731]: W1216 12:26:40.233328 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.233359 kubelet[2731]: E1216 12:26:40.233337 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.233604 kubelet[2731]: E1216 12:26:40.233592 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.233604 kubelet[2731]: W1216 12:26:40.233602 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.233645 kubelet[2731]: E1216 12:26:40.233612 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.233783 kubelet[2731]: E1216 12:26:40.233772 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.233810 kubelet[2731]: W1216 12:26:40.233783 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.233810 kubelet[2731]: E1216 12:26:40.233794 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:26:40.234118 kubelet[2731]: E1216 12:26:40.234038 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.234141 kubelet[2731]: W1216 12:26:40.234121 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.234169 kubelet[2731]: E1216 12:26:40.234140 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.234372 kubelet[2731]: E1216 12:26:40.234359 2731 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:26:40.234403 kubelet[2731]: W1216 12:26:40.234373 2731 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:26:40.234403 kubelet[2731]: E1216 12:26:40.234384 2731 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:26:40.820352 containerd[1582]: time="2025-12-16T12:26:40.819985888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:40.822594 containerd[1582]: time="2025-12-16T12:26:40.822476261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4262566" Dec 16 12:26:40.825888 containerd[1582]: time="2025-12-16T12:26:40.825810307Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:40.832854 containerd[1582]: time="2025-12-16T12:26:40.832783456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:40.833640 containerd[1582]: time="2025-12-16T12:26:40.833595923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.357055569s" Dec 16 12:26:40.833640 containerd[1582]: time="2025-12-16T12:26:40.833635451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:26:40.843825 containerd[1582]: time="2025-12-16T12:26:40.843767734Z" level=info msg="CreateContainer within sandbox \"5bc71a37be2eaf871dcfaa51def484c23e480f8fac289aa6767bc908553beff5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:26:40.859517 containerd[1582]: time="2025-12-16T12:26:40.858719934Z" level=info msg="Container e37f323eb08e3b76eb122880edd581f684783a95c66f61759f93ac6871e36a55: 
CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:40.873013 containerd[1582]: time="2025-12-16T12:26:40.872832661Z" level=info msg="CreateContainer within sandbox \"5bc71a37be2eaf871dcfaa51def484c23e480f8fac289aa6767bc908553beff5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e37f323eb08e3b76eb122880edd581f684783a95c66f61759f93ac6871e36a55\"" Dec 16 12:26:40.873928 containerd[1582]: time="2025-12-16T12:26:40.873453934Z" level=info msg="StartContainer for \"e37f323eb08e3b76eb122880edd581f684783a95c66f61759f93ac6871e36a55\"" Dec 16 12:26:40.877128 containerd[1582]: time="2025-12-16T12:26:40.877060631Z" level=info msg="connecting to shim e37f323eb08e3b76eb122880edd581f684783a95c66f61759f93ac6871e36a55" address="unix:///run/containerd/s/be5c97897981944d6e0559d6e9fb8d4e6a2f0f2d031e471fd39e512e301678d5" protocol=ttrpc version=3 Dec 16 12:26:40.902615 systemd[1]: Started cri-containerd-e37f323eb08e3b76eb122880edd581f684783a95c66f61759f93ac6871e36a55.scope - libcontainer container e37f323eb08e3b76eb122880edd581f684783a95c66f61759f93ac6871e36a55. Dec 16 12:26:40.972000 audit: BPF prog-id=169 op=LOAD Dec 16 12:26:40.972000 audit[3475]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=3322 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:40.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533376633323365623038653362373665623132323838306564643538 Dec 16 12:26:40.972000 audit: BPF prog-id=170 op=LOAD Dec 16 12:26:40.972000 audit[3475]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=3322 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:40.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533376633323365623038653362373665623132323838306564643538 Dec 16 12:26:40.972000 audit: BPF prog-id=170 op=UNLOAD Dec 16 12:26:40.972000 audit[3475]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:40.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533376633323365623038653362373665623132323838306564643538 Dec 16 12:26:40.972000 audit: BPF prog-id=169 op=UNLOAD Dec 16 12:26:40.972000 audit[3475]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:40.972000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533376633323365623038653362373665623132323838306564643538 Dec 16 12:26:40.972000 audit: BPF prog-id=171 op=LOAD Dec 16 12:26:40.972000 audit[3475]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=3322 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:40.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533376633323365623038653362373665623132323838306564643538 Dec 16 12:26:41.000494 containerd[1582]: time="2025-12-16T12:26:40.999420450Z" level=info msg="StartContainer for \"e37f323eb08e3b76eb122880edd581f684783a95c66f61759f93ac6871e36a55\" returns successfully" Dec 16 12:26:41.017524 systemd[1]: cri-containerd-e37f323eb08e3b76eb122880edd581f684783a95c66f61759f93ac6871e36a55.scope: Deactivated successfully. Dec 16 12:26:41.018951 containerd[1582]: time="2025-12-16T12:26:41.018903635Z" level=info msg="received container exit event container_id:\"e37f323eb08e3b76eb122880edd581f684783a95c66f61759f93ac6871e36a55\" id:\"e37f323eb08e3b76eb122880edd581f684783a95c66f61759f93ac6871e36a55\" pid:3487 exited_at:{seconds:1765888001 nanos:18530291}" Dec 16 12:26:41.023000 audit: BPF prog-id=171 op=UNLOAD Dec 16 12:26:41.050271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e37f323eb08e3b76eb122880edd581f684783a95c66f61759f93ac6871e36a55-rootfs.mount: Deactivated successfully. 
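[editor's note] The repeated kubelet errors above come from the FlexVolume dynamic-plugin prober: it finds the plugin directory nodeagent~uds, but the driver executable /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not present, so the "init" driver call returns empty output and the JSON decode fails with "unexpected end of JSON input". A minimal Go sketch of that failure mode follows; the driverStatus type is illustrative only, not the kubelet's own code.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus mirrors the general shape of a FlexVolume driver reply
// ("status", optional "message", and for init a "capabilities" map).
// Illustrative type, not the kubelet's implementation.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st driverStatus

	// The nodeagent~uds/uds binary is missing, so the "init" call produces no
	// output at all; decoding the empty result is what raises the error
	// repeated in the log above.
	fmt.Println(json.Unmarshal([]byte(""), &st)) // unexpected end of JSON input

	// A well-formed init reply would parse cleanly instead, e.g.:
	ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
	if err := json.Unmarshal(ok, &st); err == nil {
		fmt.Printf("status=%s attach=%v\n", st.Status, st.Capabilities["attach"])
	}
}
```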
Dec 16 12:26:41.144502 kubelet[2731]: I1216 12:26:41.144374 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:26:41.144993 kubelet[2731]: E1216 12:26:41.144697 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:41.144993 kubelet[2731]: E1216 12:26:41.144835 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:41.147109 containerd[1582]: time="2025-12-16T12:26:41.147069333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:26:41.168405 kubelet[2731]: I1216 12:26:41.168236 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-698945d8f4-hclzx" podStartSLOduration=2.605353073 podStartE2EDuration="4.16821822s" podCreationTimestamp="2025-12-16 12:26:37 +0000 UTC" firstStartedPulling="2025-12-16 12:26:37.911483822 +0000 UTC m=+29.000468655" lastFinishedPulling="2025-12-16 12:26:39.474348929 +0000 UTC m=+30.563333802" observedRunningTime="2025-12-16 12:26:40.154288985 +0000 UTC m=+31.243273858" watchObservedRunningTime="2025-12-16 12:26:41.16821822 +0000 UTC m=+32.257203093" Dec 16 12:26:42.033054 kubelet[2731]: E1216 12:26:42.032975 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8dw5m" podUID="8ba61881-a1a2-472c-ad8a-7b1172620126" Dec 16 12:26:42.925631 containerd[1582]: time="2025-12-16T12:26:42.925563808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:42.929345 containerd[1582]: time="2025-12-16T12:26:42.929263527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:42.931503 containerd[1582]: time="2025-12-16T12:26:42.931385574Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:42.932924 containerd[1582]: time="2025-12-16T12:26:42.932854107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:42.933477 containerd[1582]: time="2025-12-16T12:26:42.933429087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 1.786316106s" Dec 16 12:26:42.933477 containerd[1582]: time="2025-12-16T12:26:42.933461972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:26:42.938109 containerd[1582]: time="2025-12-16T12:26:42.938063167Z" level=info msg="CreateContainer within sandbox 
\"5bc71a37be2eaf871dcfaa51def484c23e480f8fac289aa6767bc908553beff5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:26:42.987365 containerd[1582]: time="2025-12-16T12:26:42.987003821Z" level=info msg="Container c147085095102512eb7150c4be85273a5f0e255f56dc2479405baafa917cfa89: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:43.043671 containerd[1582]: time="2025-12-16T12:26:43.043606999Z" level=info msg="CreateContainer within sandbox \"5bc71a37be2eaf871dcfaa51def484c23e480f8fac289aa6767bc908553beff5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c147085095102512eb7150c4be85273a5f0e255f56dc2479405baafa917cfa89\"" Dec 16 12:26:43.044715 containerd[1582]: time="2025-12-16T12:26:43.044668161Z" level=info msg="StartContainer for \"c147085095102512eb7150c4be85273a5f0e255f56dc2479405baafa917cfa89\"" Dec 16 12:26:43.047363 containerd[1582]: time="2025-12-16T12:26:43.047290864Z" level=info msg="connecting to shim c147085095102512eb7150c4be85273a5f0e255f56dc2479405baafa917cfa89" address="unix:///run/containerd/s/be5c97897981944d6e0559d6e9fb8d4e6a2f0f2d031e471fd39e512e301678d5" protocol=ttrpc version=3 Dec 16 12:26:43.073613 systemd[1]: Started cri-containerd-c147085095102512eb7150c4be85273a5f0e255f56dc2479405baafa917cfa89.scope - libcontainer container c147085095102512eb7150c4be85273a5f0e255f56dc2479405baafa917cfa89. Dec 16 12:26:43.133000 audit: BPF prog-id=172 op=LOAD Dec 16 12:26:43.133000 audit[3534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=3322 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:43.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331343730383530393531303235313265623731353063346265383532 Dec 16 12:26:43.134000 audit: BPF prog-id=173 op=LOAD Dec 16 12:26:43.134000 audit[3534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=3322 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:43.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331343730383530393531303235313265623731353063346265383532 Dec 16 12:26:43.134000 audit: BPF prog-id=173 op=UNLOAD Dec 16 12:26:43.134000 audit[3534]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:43.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331343730383530393531303235313265623731353063346265383532 Dec 16 12:26:43.134000 audit: BPF prog-id=172 op=UNLOAD Dec 16 12:26:43.134000 audit[3534]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:43.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331343730383530393531303235313265623731353063346265383532 Dec 16 12:26:43.134000 audit: BPF prog-id=174 op=LOAD Dec 16 12:26:43.134000 audit[3534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=3322 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:43.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331343730383530393531303235313265623731353063346265383532 Dec 16 12:26:43.154550 containerd[1582]: time="2025-12-16T12:26:43.154502258Z" level=info msg="StartContainer for \"c147085095102512eb7150c4be85273a5f0e255f56dc2479405baafa917cfa89\" returns successfully" Dec 16 12:26:43.164685 kubelet[2731]: E1216 12:26:43.164626 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:43.874832 systemd[1]: cri-containerd-c147085095102512eb7150c4be85273a5f0e255f56dc2479405baafa917cfa89.scope: Deactivated successfully. Dec 16 12:26:43.875282 systemd[1]: cri-containerd-c147085095102512eb7150c4be85273a5f0e255f56dc2479405baafa917cfa89.scope: Consumed 526ms CPU time, 178.6M memory peak, 2.5M read from disk, 165.9M written to disk. Dec 16 12:26:43.877771 containerd[1582]: time="2025-12-16T12:26:43.877644812Z" level=info msg="received container exit event container_id:\"c147085095102512eb7150c4be85273a5f0e255f56dc2479405baafa917cfa89\" id:\"c147085095102512eb7150c4be85273a5f0e255f56dc2479405baafa917cfa89\" pid:3547 exited_at:{seconds:1765888003 nanos:876605612}" Dec 16 12:26:43.878000 audit: BPF prog-id=174 op=UNLOAD Dec 16 12:26:43.902194 kubelet[2731]: I1216 12:26:43.902159 2731 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 12:26:43.903737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c147085095102512eb7150c4be85273a5f0e255f56dc2479405baafa917cfa89-rootfs.mount: Deactivated successfully. Dec 16 12:26:44.157128 systemd[1]: Created slice kubepods-besteffort-pod8ba61881_a1a2_472c_ad8a_7b1172620126.slice - libcontainer container kubepods-besteffort-pod8ba61881_a1a2_472c_ad8a_7b1172620126.slice. 
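[editor's note] The recurring "Nameserver limits exceeded" warnings reflect the kubelet capping a pod's resolv.conf at three nameserver entries (the classic resolver limit); the node's resolver list evidently holds more, so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied. A hedged sketch of that trimming behaviour follows; capNameservers and the fourth address are hypothetical, for illustration only.

```go
package main

import "fmt"

// maxNameservers reflects the three-entry resolv.conf limit the kubelet
// warning refers to.
const maxNameservers = 3

// capNameservers is a hypothetical helper, not kubelet code: it keeps only the
// first three nameservers, which is why extra entries are "omitted" in the log.
func capNameservers(ns []string) []string {
	if len(ns) <= maxNameservers {
		return ns
	}
	return ns[:maxNameservers]
}

func main() {
	// The first three addresses are the ones the kubelet reports applying; the
	// fourth is an assumed extra entry standing in for whatever was dropped.
	node := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	fmt.Println(capNameservers(node)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```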
Dec 16 12:26:44.163709 kubelet[2731]: I1216 12:26:44.163665 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf1efc62-d121-4b86-8964-6178a6812710-config-volume\") pod \"coredns-66bc5c9577-z6rv6\" (UID: \"cf1efc62-d121-4b86-8964-6178a6812710\") " pod="kube-system/coredns-66bc5c9577-z6rv6" Dec 16 12:26:44.163709 kubelet[2731]: I1216 12:26:44.163718 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d237c895-0922-4d34-99eb-c1d5a8780e41-calico-apiserver-certs\") pod \"calico-apiserver-86f4fcfc8d-xt2gx\" (UID: \"d237c895-0922-4d34-99eb-c1d5a8780e41\") " pod="calico-apiserver/calico-apiserver-86f4fcfc8d-xt2gx" Dec 16 12:26:44.163944 kubelet[2731]: I1216 12:26:44.163741 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfs87\" (UniqueName: \"kubernetes.io/projected/b9fe45d7-db81-4803-93c7-fa2d49931f66-kube-api-access-pfs87\") pod \"coredns-66bc5c9577-6zw4p\" (UID: \"b9fe45d7-db81-4803-93c7-fa2d49931f66\") " pod="kube-system/coredns-66bc5c9577-6zw4p" Dec 16 12:26:44.163944 kubelet[2731]: I1216 12:26:44.163759 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtvpl\" (UniqueName: \"kubernetes.io/projected/cf1efc62-d121-4b86-8964-6178a6812710-kube-api-access-jtvpl\") pod \"coredns-66bc5c9577-z6rv6\" (UID: \"cf1efc62-d121-4b86-8964-6178a6812710\") " pod="kube-system/coredns-66bc5c9577-z6rv6" Dec 16 12:26:44.163944 kubelet[2731]: I1216 12:26:44.163778 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c0a4e673-b12d-498c-8d03-d2574bb6b967-calico-apiserver-certs\") pod \"calico-apiserver-86f4fcfc8d-7nnp6\" (UID: \"c0a4e673-b12d-498c-8d03-d2574bb6b967\") " pod="calico-apiserver/calico-apiserver-86f4fcfc8d-7nnp6" Dec 16 12:26:44.163944 kubelet[2731]: I1216 12:26:44.163801 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9fe45d7-db81-4803-93c7-fa2d49931f66-config-volume\") pod \"coredns-66bc5c9577-6zw4p\" (UID: \"b9fe45d7-db81-4803-93c7-fa2d49931f66\") " pod="kube-system/coredns-66bc5c9577-6zw4p" Dec 16 12:26:44.163944 kubelet[2731]: I1216 12:26:44.163818 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsx9c\" (UniqueName: \"kubernetes.io/projected/c0a4e673-b12d-498c-8d03-d2574bb6b967-kube-api-access-gsx9c\") pod \"calico-apiserver-86f4fcfc8d-7nnp6\" (UID: \"c0a4e673-b12d-498c-8d03-d2574bb6b967\") " pod="calico-apiserver/calico-apiserver-86f4fcfc8d-7nnp6" Dec 16 12:26:44.164059 kubelet[2731]: I1216 12:26:44.163842 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzcd8\" (UniqueName: \"kubernetes.io/projected/d237c895-0922-4d34-99eb-c1d5a8780e41-kube-api-access-jzcd8\") pod \"calico-apiserver-86f4fcfc8d-xt2gx\" (UID: \"d237c895-0922-4d34-99eb-c1d5a8780e41\") " pod="calico-apiserver/calico-apiserver-86f4fcfc8d-xt2gx" Dec 16 12:26:44.165404 systemd[1]: Created slice kubepods-burstable-podb9fe45d7_db81_4803_93c7_fa2d49931f66.slice - libcontainer container 
kubepods-burstable-podb9fe45d7_db81_4803_93c7_fa2d49931f66.slice. Dec 16 12:26:44.172101 containerd[1582]: time="2025-12-16T12:26:44.172054134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8dw5m,Uid:8ba61881-a1a2-472c-ad8a-7b1172620126,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:44.181208 systemd[1]: Created slice kubepods-burstable-podcf1efc62_d121_4b86_8964_6178a6812710.slice - libcontainer container kubepods-burstable-podcf1efc62_d121_4b86_8964_6178a6812710.slice. Dec 16 12:26:44.182838 kubelet[2731]: E1216 12:26:44.182449 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:44.186683 containerd[1582]: time="2025-12-16T12:26:44.185169608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:26:44.197797 systemd[1]: Created slice kubepods-besteffort-podc0a4e673_b12d_498c_8d03_d2574bb6b967.slice - libcontainer container kubepods-besteffort-podc0a4e673_b12d_498c_8d03_d2574bb6b967.slice. Dec 16 12:26:44.212070 systemd[1]: Created slice kubepods-besteffort-podd237c895_0922_4d34_99eb_c1d5a8780e41.slice - libcontainer container kubepods-besteffort-podd237c895_0922_4d34_99eb_c1d5a8780e41.slice. Dec 16 12:26:44.218536 systemd[1]: Created slice kubepods-besteffort-pod3e707694_c3c3_46cb_9c34_819568db9981.slice - libcontainer container kubepods-besteffort-pod3e707694_c3c3_46cb_9c34_819568db9981.slice. Dec 16 12:26:44.224718 systemd[1]: Created slice kubepods-besteffort-podcb75681b_286c_4a6b_a7e3_6df9c7f59d30.slice - libcontainer container kubepods-besteffort-podcb75681b_286c_4a6b_a7e3_6df9c7f59d30.slice. Dec 16 12:26:44.229207 systemd[1]: Created slice kubepods-besteffort-pode2a5d33b_e024_4409_ab04_3cedb6623ebf.slice - libcontainer container kubepods-besteffort-pode2a5d33b_e024_4409_ab04_3cedb6623ebf.slice. Dec 16 12:26:44.234815 systemd[1]: Created slice kubepods-besteffort-pode774688f_af9f_49ba_93f6_0e9b13337ee0.slice - libcontainer container kubepods-besteffort-pode774688f_af9f_49ba_93f6_0e9b13337ee0.slice. 
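[editor's note] The VerifyControllerAttachedVolume entries above and below identify each volume by a UniqueName of the form <plugin>/<pod UID>-<volume name>. A small sketch reconstructing the coredns config-volume name seen in the log follows; uniqueVolumeName is an illustrative helper, not the kubelet's volume manager.

```go
package main

import "fmt"

// uniqueVolumeName composes the identifier pattern visible in the reconciler
// log entries: <plugin name>/<pod UID>-<volume name>. Hypothetical helper.
func uniqueVolumeName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	// Matches the coredns-66bc5c9577-z6rv6 config-volume entry logged above.
	fmt.Println(uniqueVolumeName(
		"kubernetes.io/configmap",
		"cf1efc62-d121-4b86-8964-6178a6812710",
		"config-volume",
	))
	// Output: kubernetes.io/configmap/cf1efc62-d121-4b86-8964-6178a6812710-config-volume
}
```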
Dec 16 12:26:44.264225 kubelet[2731]: I1216 12:26:44.264150 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb75681b-286c-4a6b-a7e3-6df9c7f59d30-tigera-ca-bundle\") pod \"calico-kube-controllers-686cd7448d-zpgvc\" (UID: \"cb75681b-286c-4a6b-a7e3-6df9c7f59d30\") " pod="calico-system/calico-kube-controllers-686cd7448d-zpgvc" Dec 16 12:26:44.264497 kubelet[2731]: I1216 12:26:44.264288 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e707694-c3c3-46cb-9c34-819568db9981-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-cbqqb\" (UID: \"3e707694-c3c3-46cb-9c34-819568db9981\") " pod="calico-system/goldmane-7c778bb748-cbqqb" Dec 16 12:26:44.264497 kubelet[2731]: I1216 12:26:44.264344 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njkz9\" (UniqueName: \"kubernetes.io/projected/cb75681b-286c-4a6b-a7e3-6df9c7f59d30-kube-api-access-njkz9\") pod \"calico-kube-controllers-686cd7448d-zpgvc\" (UID: \"cb75681b-286c-4a6b-a7e3-6df9c7f59d30\") " pod="calico-system/calico-kube-controllers-686cd7448d-zpgvc" Dec 16 12:26:44.264743 kubelet[2731]: I1216 12:26:44.264629 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e707694-c3c3-46cb-9c34-819568db9981-config\") pod \"goldmane-7c778bb748-cbqqb\" (UID: \"3e707694-c3c3-46cb-9c34-819568db9981\") " pod="calico-system/goldmane-7c778bb748-cbqqb" Dec 16 12:26:44.264743 kubelet[2731]: I1216 12:26:44.264660 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvtp4\" (UniqueName: \"kubernetes.io/projected/e774688f-af9f-49ba-93f6-0e9b13337ee0-kube-api-access-wvtp4\") pod \"calico-apiserver-6b7bcbfb5-5mqzg\" (UID: \"e774688f-af9f-49ba-93f6-0e9b13337ee0\") " pod="calico-apiserver/calico-apiserver-6b7bcbfb5-5mqzg" Dec 16 12:26:44.264987 kubelet[2731]: I1216 12:26:44.264716 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2a5d33b-e024-4409-ab04-3cedb6623ebf-whisker-backend-key-pair\") pod \"whisker-5497fbb54b-cv47h\" (UID: \"e2a5d33b-e024-4409-ab04-3cedb6623ebf\") " pod="calico-system/whisker-5497fbb54b-cv47h" Dec 16 12:26:44.265104 kubelet[2731]: I1216 12:26:44.265088 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e774688f-af9f-49ba-93f6-0e9b13337ee0-calico-apiserver-certs\") pod \"calico-apiserver-6b7bcbfb5-5mqzg\" (UID: \"e774688f-af9f-49ba-93f6-0e9b13337ee0\") " pod="calico-apiserver/calico-apiserver-6b7bcbfb5-5mqzg" Dec 16 12:26:44.265505 kubelet[2731]: I1216 12:26:44.265450 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wmvz\" (UniqueName: \"kubernetes.io/projected/3e707694-c3c3-46cb-9c34-819568db9981-kube-api-access-5wmvz\") pod \"goldmane-7c778bb748-cbqqb\" (UID: \"3e707694-c3c3-46cb-9c34-819568db9981\") " pod="calico-system/goldmane-7c778bb748-cbqqb" Dec 16 12:26:44.265726 kubelet[2731]: I1216 12:26:44.265691 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a5d33b-e024-4409-ab04-3cedb6623ebf-whisker-ca-bundle\") pod \"whisker-5497fbb54b-cv47h\" (UID: \"e2a5d33b-e024-4409-ab04-3cedb6623ebf\") " pod="calico-system/whisker-5497fbb54b-cv47h" Dec 16 12:26:44.266534 kubelet[2731]: I1216 12:26:44.266505 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3e707694-c3c3-46cb-9c34-819568db9981-goldmane-key-pair\") pod \"goldmane-7c778bb748-cbqqb\" (UID: \"3e707694-c3c3-46cb-9c34-819568db9981\") " pod="calico-system/goldmane-7c778bb748-cbqqb" Dec 16 12:26:44.266914 kubelet[2731]: I1216 12:26:44.266861 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njfqd\" (UniqueName: \"kubernetes.io/projected/e2a5d33b-e024-4409-ab04-3cedb6623ebf-kube-api-access-njfqd\") pod \"whisker-5497fbb54b-cv47h\" (UID: \"e2a5d33b-e024-4409-ab04-3cedb6623ebf\") " pod="calico-system/whisker-5497fbb54b-cv47h" Dec 16 12:26:44.309077 containerd[1582]: time="2025-12-16T12:26:44.306669983Z" level=error msg="Failed to destroy network for sandbox \"902485443c7ab795d7018f904094fb87f7f2b138f60b72ca70352493499b6da0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.308449 systemd[1]: run-netns-cni\x2d1423bebe\x2d0e5d\x2dff3d\x2d2d21\x2d87d537b5b0d6.mount: Deactivated successfully. Dec 16 12:26:44.361070 containerd[1582]: time="2025-12-16T12:26:44.361012130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8dw5m,Uid:8ba61881-a1a2-472c-ad8a-7b1172620126,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"902485443c7ab795d7018f904094fb87f7f2b138f60b72ca70352493499b6da0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.366083 kubelet[2731]: E1216 12:26:44.366007 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"902485443c7ab795d7018f904094fb87f7f2b138f60b72ca70352493499b6da0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.366201 kubelet[2731]: E1216 12:26:44.366111 2731 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"902485443c7ab795d7018f904094fb87f7f2b138f60b72ca70352493499b6da0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8dw5m" Dec 16 12:26:44.366201 kubelet[2731]: E1216 12:26:44.366135 2731 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"902485443c7ab795d7018f904094fb87f7f2b138f60b72ca70352493499b6da0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-8dw5m" Dec 16 12:26:44.366363 kubelet[2731]: E1216 12:26:44.366204 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8dw5m_calico-system(8ba61881-a1a2-472c-ad8a-7b1172620126)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8dw5m_calico-system(8ba61881-a1a2-472c-ad8a-7b1172620126)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"902485443c7ab795d7018f904094fb87f7f2b138f60b72ca70352493499b6da0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8dw5m" podUID="8ba61881-a1a2-472c-ad8a-7b1172620126" Dec 16 12:26:44.475867 kubelet[2731]: E1216 12:26:44.475764 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:44.476704 containerd[1582]: time="2025-12-16T12:26:44.476430582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6zw4p,Uid:b9fe45d7-db81-4803-93c7-fa2d49931f66,Namespace:kube-system,Attempt:0,}" Dec 16 12:26:44.492890 kubelet[2731]: E1216 12:26:44.492839 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:44.493663 containerd[1582]: time="2025-12-16T12:26:44.493623874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-z6rv6,Uid:cf1efc62-d121-4b86-8964-6178a6812710,Namespace:kube-system,Attempt:0,}" Dec 16 12:26:44.507606 containerd[1582]: time="2025-12-16T12:26:44.507554560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fcfc8d-7nnp6,Uid:c0a4e673-b12d-498c-8d03-d2574bb6b967,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:26:44.518539 containerd[1582]: time="2025-12-16T12:26:44.518378376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fcfc8d-xt2gx,Uid:d237c895-0922-4d34-99eb-c1d5a8780e41,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:26:44.525456 containerd[1582]: time="2025-12-16T12:26:44.525405726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-cbqqb,Uid:3e707694-c3c3-46cb-9c34-819568db9981,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:44.533690 containerd[1582]: time="2025-12-16T12:26:44.533637371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-686cd7448d-zpgvc,Uid:cb75681b-286c-4a6b-a7e3-6df9c7f59d30,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:44.537202 containerd[1582]: time="2025-12-16T12:26:44.537137564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5497fbb54b-cv47h,Uid:e2a5d33b-e024-4409-ab04-3cedb6623ebf,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:44.542462 containerd[1582]: time="2025-12-16T12:26:44.542395355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7bcbfb5-5mqzg,Uid:e774688f-af9f-49ba-93f6-0e9b13337ee0,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:26:44.548794 containerd[1582]: time="2025-12-16T12:26:44.548682262Z" level=error msg="Failed to destroy network for sandbox \"f684ae42ce4ff5d315929cb8000d5332f4bd2a5fb378a5407446d5b36af67d5d\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.579354 containerd[1582]: time="2025-12-16T12:26:44.579277020Z" level=error msg="Failed to destroy network for sandbox \"367fb664fbb1a5c90bfef2b06b7e41c071bf1483d0d1d247a8918ba8524b6574\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.581733 containerd[1582]: time="2025-12-16T12:26:44.581667089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6zw4p,Uid:b9fe45d7-db81-4803-93c7-fa2d49931f66,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f684ae42ce4ff5d315929cb8000d5332f4bd2a5fb378a5407446d5b36af67d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.583109 kubelet[2731]: E1216 12:26:44.581949 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f684ae42ce4ff5d315929cb8000d5332f4bd2a5fb378a5407446d5b36af67d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.583109 kubelet[2731]: E1216 12:26:44.582024 2731 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f684ae42ce4ff5d315929cb8000d5332f4bd2a5fb378a5407446d5b36af67d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6zw4p" Dec 16 12:26:44.583109 kubelet[2731]: E1216 12:26:44.582048 2731 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f684ae42ce4ff5d315929cb8000d5332f4bd2a5fb378a5407446d5b36af67d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6zw4p" Dec 16 12:26:44.583568 kubelet[2731]: E1216 12:26:44.582107 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-6zw4p_kube-system(b9fe45d7-db81-4803-93c7-fa2d49931f66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-6zw4p_kube-system(b9fe45d7-db81-4803-93c7-fa2d49931f66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f684ae42ce4ff5d315929cb8000d5332f4bd2a5fb378a5407446d5b36af67d5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-6zw4p" podUID="b9fe45d7-db81-4803-93c7-fa2d49931f66" Dec 16 12:26:44.613407 containerd[1582]: time="2025-12-16T12:26:44.613225036Z" level=error msg="Failed to destroy network for sandbox \"aeaeb2cb215f83b7a02684a8f3b26ad9cf536c66cb06d3d94fbe332a57e54d74\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.614696 containerd[1582]: time="2025-12-16T12:26:44.614635354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-z6rv6,Uid:cf1efc62-d121-4b86-8964-6178a6812710,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"367fb664fbb1a5c90bfef2b06b7e41c071bf1483d0d1d247a8918ba8524b6574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.615179 kubelet[2731]: E1216 12:26:44.615132 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"367fb664fbb1a5c90bfef2b06b7e41c071bf1483d0d1d247a8918ba8524b6574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.615274 kubelet[2731]: E1216 12:26:44.615206 2731 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"367fb664fbb1a5c90bfef2b06b7e41c071bf1483d0d1d247a8918ba8524b6574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-z6rv6" Dec 16 12:26:44.615274 kubelet[2731]: E1216 12:26:44.615226 2731 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"367fb664fbb1a5c90bfef2b06b7e41c071bf1483d0d1d247a8918ba8524b6574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-z6rv6" Dec 16 12:26:44.615331 kubelet[2731]: E1216 12:26:44.615281 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-z6rv6_kube-system(cf1efc62-d121-4b86-8964-6178a6812710)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-z6rv6_kube-system(cf1efc62-d121-4b86-8964-6178a6812710)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"367fb664fbb1a5c90bfef2b06b7e41c071bf1483d0d1d247a8918ba8524b6574\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-z6rv6" podUID="cf1efc62-d121-4b86-8964-6178a6812710" Dec 16 12:26:44.618552 containerd[1582]: time="2025-12-16T12:26:44.618483347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fcfc8d-7nnp6,Uid:c0a4e673-b12d-498c-8d03-d2574bb6b967,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeaeb2cb215f83b7a02684a8f3b26ad9cf536c66cb06d3d94fbe332a57e54d74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 16 12:26:44.619260 kubelet[2731]: E1216 12:26:44.618832 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeaeb2cb215f83b7a02684a8f3b26ad9cf536c66cb06d3d94fbe332a57e54d74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.619260 kubelet[2731]: E1216 12:26:44.618923 2731 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeaeb2cb215f83b7a02684a8f3b26ad9cf536c66cb06d3d94fbe332a57e54d74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-7nnp6" Dec 16 12:26:44.619260 kubelet[2731]: E1216 12:26:44.618945 2731 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeaeb2cb215f83b7a02684a8f3b26ad9cf536c66cb06d3d94fbe332a57e54d74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-7nnp6" Dec 16 12:26:44.619388 kubelet[2731]: E1216 12:26:44.619006 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86f4fcfc8d-7nnp6_calico-apiserver(c0a4e673-b12d-498c-8d03-d2574bb6b967)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86f4fcfc8d-7nnp6_calico-apiserver(c0a4e673-b12d-498c-8d03-d2574bb6b967)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aeaeb2cb215f83b7a02684a8f3b26ad9cf536c66cb06d3d94fbe332a57e54d74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-7nnp6" podUID="c0a4e673-b12d-498c-8d03-d2574bb6b967" Dec 16 12:26:44.623696 containerd[1582]: time="2025-12-16T12:26:44.623581720Z" level=error msg="Failed to destroy network for sandbox \"793d00c441ab9c6eabdcf5bc430617c7121ee594377953b6a5830c6545317b2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.630082 containerd[1582]: time="2025-12-16T12:26:44.630013323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fcfc8d-xt2gx,Uid:d237c895-0922-4d34-99eb-c1d5a8780e41,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"793d00c441ab9c6eabdcf5bc430617c7121ee594377953b6a5830c6545317b2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.630700 kubelet[2731]: E1216 12:26:44.630636 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"793d00c441ab9c6eabdcf5bc430617c7121ee594377953b6a5830c6545317b2a\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.630794 kubelet[2731]: E1216 12:26:44.630735 2731 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"793d00c441ab9c6eabdcf5bc430617c7121ee594377953b6a5830c6545317b2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-xt2gx" Dec 16 12:26:44.630794 kubelet[2731]: E1216 12:26:44.630757 2731 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"793d00c441ab9c6eabdcf5bc430617c7121ee594377953b6a5830c6545317b2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-xt2gx" Dec 16 12:26:44.631168 kubelet[2731]: E1216 12:26:44.631113 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86f4fcfc8d-xt2gx_calico-apiserver(d237c895-0922-4d34-99eb-c1d5a8780e41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86f4fcfc8d-xt2gx_calico-apiserver(d237c895-0922-4d34-99eb-c1d5a8780e41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"793d00c441ab9c6eabdcf5bc430617c7121ee594377953b6a5830c6545317b2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-xt2gx" podUID="d237c895-0922-4d34-99eb-c1d5a8780e41" Dec 16 12:26:44.640773 containerd[1582]: time="2025-12-16T12:26:44.640702484Z" level=error msg="Failed to destroy network for sandbox \"53773be455ce7bcf137758ed7a27d41362d85c9144de4afc2f919f8c7294e539\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.660415 containerd[1582]: time="2025-12-16T12:26:44.660364814Z" level=error msg="Failed to destroy network for sandbox \"d8d9df9c21f193004bd64e1ed9901a8f6c0eb847dd1fd298b1c4e551296172cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.671564 containerd[1582]: time="2025-12-16T12:26:44.671496465Z" level=error msg="Failed to destroy network for sandbox \"8978c488d0a58b7ae86a071fa282ec43a37ca74b24824a67275fa653a62877c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.672591 containerd[1582]: time="2025-12-16T12:26:44.672534781Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5497fbb54b-cv47h,Uid:e2a5d33b-e024-4409-ab04-3cedb6623ebf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d8d9df9c21f193004bd64e1ed9901a8f6c0eb847dd1fd298b1c4e551296172cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.673134 kubelet[2731]: E1216 12:26:44.672985 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d9df9c21f193004bd64e1ed9901a8f6c0eb847dd1fd298b1c4e551296172cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.673239 kubelet[2731]: E1216 12:26:44.673183 2731 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d9df9c21f193004bd64e1ed9901a8f6c0eb847dd1fd298b1c4e551296172cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5497fbb54b-cv47h" Dec 16 12:26:44.673239 kubelet[2731]: E1216 12:26:44.673217 2731 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d9df9c21f193004bd64e1ed9901a8f6c0eb847dd1fd298b1c4e551296172cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5497fbb54b-cv47h" Dec 16 12:26:44.673417 kubelet[2731]: E1216 12:26:44.673365 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5497fbb54b-cv47h_calico-system(e2a5d33b-e024-4409-ab04-3cedb6623ebf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5497fbb54b-cv47h_calico-system(e2a5d33b-e024-4409-ab04-3cedb6623ebf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8d9df9c21f193004bd64e1ed9901a8f6c0eb847dd1fd298b1c4e551296172cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5497fbb54b-cv47h" podUID="e2a5d33b-e024-4409-ab04-3cedb6623ebf" Dec 16 12:26:44.674685 containerd[1582]: time="2025-12-16T12:26:44.674635457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-cbqqb,Uid:3e707694-c3c3-46cb-9c34-819568db9981,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"53773be455ce7bcf137758ed7a27d41362d85c9144de4afc2f919f8c7294e539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.675266 kubelet[2731]: E1216 12:26:44.675049 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53773be455ce7bcf137758ed7a27d41362d85c9144de4afc2f919f8c7294e539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.675266 kubelet[2731]: E1216 
12:26:44.675132 2731 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53773be455ce7bcf137758ed7a27d41362d85c9144de4afc2f919f8c7294e539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-cbqqb" Dec 16 12:26:44.675266 kubelet[2731]: E1216 12:26:44.675153 2731 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53773be455ce7bcf137758ed7a27d41362d85c9144de4afc2f919f8c7294e539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-cbqqb" Dec 16 12:26:44.675419 kubelet[2731]: E1216 12:26:44.675217 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-cbqqb_calico-system(3e707694-c3c3-46cb-9c34-819568db9981)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-cbqqb_calico-system(3e707694-c3c3-46cb-9c34-819568db9981)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53773be455ce7bcf137758ed7a27d41362d85c9144de4afc2f919f8c7294e539\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-cbqqb" podUID="3e707694-c3c3-46cb-9c34-819568db9981" Dec 16 12:26:44.675806 containerd[1582]: time="2025-12-16T12:26:44.675739182Z" level=error msg="Failed to destroy network for sandbox \"ae50b50457f8110a556633eaed6e62613d632494aeb5eb3b7f0c558c5633447f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.677248 containerd[1582]: time="2025-12-16T12:26:44.677121257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-686cd7448d-zpgvc,Uid:cb75681b-286c-4a6b-a7e3-6df9c7f59d30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8978c488d0a58b7ae86a071fa282ec43a37ca74b24824a67275fa653a62877c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.677925 kubelet[2731]: E1216 12:26:44.677790 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8978c488d0a58b7ae86a071fa282ec43a37ca74b24824a67275fa653a62877c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.677990 kubelet[2731]: E1216 12:26:44.677951 2731 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8978c488d0a58b7ae86a071fa282ec43a37ca74b24824a67275fa653a62877c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-686cd7448d-zpgvc" Dec 16 12:26:44.678084 kubelet[2731]: E1216 12:26:44.678064 2731 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8978c488d0a58b7ae86a071fa282ec43a37ca74b24824a67275fa653a62877c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-686cd7448d-zpgvc" Dec 16 12:26:44.678546 kubelet[2731]: E1216 12:26:44.678129 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-686cd7448d-zpgvc_calico-system(cb75681b-286c-4a6b-a7e3-6df9c7f59d30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-686cd7448d-zpgvc_calico-system(cb75681b-286c-4a6b-a7e3-6df9c7f59d30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8978c488d0a58b7ae86a071fa282ec43a37ca74b24824a67275fa653a62877c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-686cd7448d-zpgvc" podUID="cb75681b-286c-4a6b-a7e3-6df9c7f59d30" Dec 16 12:26:44.680500 containerd[1582]: time="2025-12-16T12:26:44.680349140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7bcbfb5-5mqzg,Uid:e774688f-af9f-49ba-93f6-0e9b13337ee0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae50b50457f8110a556633eaed6e62613d632494aeb5eb3b7f0c558c5633447f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.680959 kubelet[2731]: E1216 12:26:44.680891 2731 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae50b50457f8110a556633eaed6e62613d632494aeb5eb3b7f0c558c5633447f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:26:44.681022 kubelet[2731]: E1216 12:26:44.680981 2731 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae50b50457f8110a556633eaed6e62613d632494aeb5eb3b7f0c558c5633447f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b7bcbfb5-5mqzg" Dec 16 12:26:44.681055 kubelet[2731]: E1216 12:26:44.681027 2731 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae50b50457f8110a556633eaed6e62613d632494aeb5eb3b7f0c558c5633447f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b7bcbfb5-5mqzg" Dec 16 
12:26:44.681160 kubelet[2731]: E1216 12:26:44.681102 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b7bcbfb5-5mqzg_calico-apiserver(e774688f-af9f-49ba-93f6-0e9b13337ee0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b7bcbfb5-5mqzg_calico-apiserver(e774688f-af9f-49ba-93f6-0e9b13337ee0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae50b50457f8110a556633eaed6e62613d632494aeb5eb3b7f0c558c5633447f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b7bcbfb5-5mqzg" podUID="e774688f-af9f-49ba-93f6-0e9b13337ee0" Dec 16 12:26:47.268204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2733187022.mount: Deactivated successfully. Dec 16 12:26:47.560895 containerd[1582]: time="2025-12-16T12:26:47.560386930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:47.561947 containerd[1582]: time="2025-12-16T12:26:47.561873724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 12:26:47.563186 containerd[1582]: time="2025-12-16T12:26:47.563108252Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:47.565149 containerd[1582]: time="2025-12-16T12:26:47.565073895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:26:47.565986 containerd[1582]: time="2025-12-16T12:26:47.565657875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.380439302s" Dec 16 12:26:47.565986 containerd[1582]: time="2025-12-16T12:26:47.565703840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:26:47.587796 containerd[1582]: time="2025-12-16T12:26:47.587712794Z" level=info msg="CreateContainer within sandbox \"5bc71a37be2eaf871dcfaa51def484c23e480f8fac289aa6767bc908553beff5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:26:47.613785 containerd[1582]: time="2025-12-16T12:26:47.613737804Z" level=info msg="Container 7a8bc421c2bd9798f5a1c5fe7ee1b789873363026e351123eeba9fbf261099f2: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:47.631997 containerd[1582]: time="2025-12-16T12:26:47.631922323Z" level=info msg="CreateContainer within sandbox \"5bc71a37be2eaf871dcfaa51def484c23e480f8fac289aa6767bc908553beff5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7a8bc421c2bd9798f5a1c5fe7ee1b789873363026e351123eeba9fbf261099f2\"" Dec 16 12:26:47.632618 containerd[1582]: time="2025-12-16T12:26:47.632588552Z" level=info msg="StartContainer for 
\"7a8bc421c2bd9798f5a1c5fe7ee1b789873363026e351123eeba9fbf261099f2\"" Dec 16 12:26:47.634467 containerd[1582]: time="2025-12-16T12:26:47.634382778Z" level=info msg="connecting to shim 7a8bc421c2bd9798f5a1c5fe7ee1b789873363026e351123eeba9fbf261099f2" address="unix:///run/containerd/s/be5c97897981944d6e0559d6e9fb8d4e6a2f0f2d031e471fd39e512e301678d5" protocol=ttrpc version=3 Dec 16 12:26:47.662699 systemd[1]: Started cri-containerd-7a8bc421c2bd9798f5a1c5fe7ee1b789873363026e351123eeba9fbf261099f2.scope - libcontainer container 7a8bc421c2bd9798f5a1c5fe7ee1b789873363026e351123eeba9fbf261099f2. Dec 16 12:26:47.735000 audit: BPF prog-id=175 op=LOAD Dec 16 12:26:47.737830 kernel: kauditd_printk_skb: 44 callbacks suppressed Dec 16 12:26:47.737943 kernel: audit: type=1334 audit(1765888007.735:569): prog-id=175 op=LOAD Dec 16 12:26:47.735000 audit[3899]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3322 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:47.745730 kernel: audit: type=1300 audit(1765888007.735:569): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3322 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:47.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386263343231633262643937393866356131633566653765653162 Dec 16 12:26:47.750819 kernel: audit: type=1327 audit(1765888007.735:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386263343231633262643937393866356131633566653765653162 Dec 16 12:26:47.750905 kernel: audit: type=1334 audit(1765888007.735:570): prog-id=176 op=LOAD Dec 16 12:26:47.735000 audit: BPF prog-id=176 op=LOAD Dec 16 12:26:47.751670 kernel: audit: type=1300 audit(1765888007.735:570): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3322 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:47.735000 audit[3899]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3322 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:47.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386263343231633262643937393866356131633566653765653162 Dec 16 12:26:47.758984 kernel: audit: type=1327 audit(1765888007.735:570): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386263343231633262643937393866356131633566653765653162 Dec 16 12:26:47.759104 kernel: audit: type=1334 audit(1765888007.736:571): prog-id=176 op=UNLOAD Dec 16 12:26:47.736000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:26:47.736000 audit[3899]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:47.763110 kernel: audit: type=1300 audit(1765888007.736:571): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:47.763177 kernel: audit: type=1327 audit(1765888007.736:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386263343231633262643937393866356131633566653765653162 Dec 16 12:26:47.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386263343231633262643937393866356131633566653765653162 Dec 16 12:26:47.736000 audit: BPF prog-id=175 op=UNLOAD Dec 16 12:26:47.767420 kernel: audit: type=1334 audit(1765888007.736:572): prog-id=175 op=UNLOAD Dec 16 12:26:47.736000 audit[3899]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3322 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:47.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386263343231633262643937393866356131633566653765653162 Dec 16 12:26:47.736000 audit: BPF prog-id=177 op=LOAD Dec 16 12:26:47.736000 audit[3899]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3322 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:47.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761386263343231633262643937393866356131633566653765653162 Dec 16 12:26:47.792357 containerd[1582]: time="2025-12-16T12:26:47.790562158Z" level=info msg="StartContainer for \"7a8bc421c2bd9798f5a1c5fe7ee1b789873363026e351123eeba9fbf261099f2\" returns successfully" Dec 16 12:26:47.939754 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
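The repeated CreatePodSandbox failures earlier in this log (goldmane, calico-kube-controllers, calico-apiserver) all reduce to the same condition: the Calico CNI plugin stats /var/lib/calico/nodename and finds nothing, because the calico-node container that was only started just above has not yet written it. A minimal sketch of that check, using only the path quoted in the error messages; anything beyond the stat/read is illustrative, not the plugin's actual code:

```go
package main

import (
	"fmt"
	"os"
)

// Sketch of the condition behind the "stat /var/lib/calico/nodename: no such
// file or directory" CNI errors above. The path comes from the log; calico-node
// writes this file once it is running, which is why the sandbox errors stop
// after the StartContainer above succeeds.
func main() {
	const nodenameFile = "/var/lib/calico/nodename"

	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		if os.IsNotExist(err) {
			fmt.Println("calico-node has not written", nodenameFile, "yet; pod networking will keep failing")
			return
		}
		fmt.Println("unexpected error:", err)
		return
	}
	fmt.Printf("node name registered by calico-node: %q\n", string(b))
}
```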
Dec 16 12:26:47.939886 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 12:26:48.194869 kubelet[2731]: I1216 12:26:48.194663 2731 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njfqd\" (UniqueName: \"kubernetes.io/projected/e2a5d33b-e024-4409-ab04-3cedb6623ebf-kube-api-access-njfqd\") pod \"e2a5d33b-e024-4409-ab04-3cedb6623ebf\" (UID: \"e2a5d33b-e024-4409-ab04-3cedb6623ebf\") " Dec 16 12:26:48.196400 kubelet[2731]: I1216 12:26:48.195144 2731 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a5d33b-e024-4409-ab04-3cedb6623ebf-whisker-ca-bundle\") pod \"e2a5d33b-e024-4409-ab04-3cedb6623ebf\" (UID: \"e2a5d33b-e024-4409-ab04-3cedb6623ebf\") " Dec 16 12:26:48.196400 kubelet[2731]: I1216 12:26:48.195196 2731 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2a5d33b-e024-4409-ab04-3cedb6623ebf-whisker-backend-key-pair\") pod \"e2a5d33b-e024-4409-ab04-3cedb6623ebf\" (UID: \"e2a5d33b-e024-4409-ab04-3cedb6623ebf\") " Dec 16 12:26:48.207760 kubelet[2731]: E1216 12:26:48.207394 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:48.221851 kubelet[2731]: I1216 12:26:48.221727 2731 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a5d33b-e024-4409-ab04-3cedb6623ebf-kube-api-access-njfqd" (OuterVolumeSpecName: "kube-api-access-njfqd") pod "e2a5d33b-e024-4409-ab04-3cedb6623ebf" (UID: "e2a5d33b-e024-4409-ab04-3cedb6623ebf"). InnerVolumeSpecName "kube-api-access-njfqd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:26:48.228289 kubelet[2731]: I1216 12:26:48.228239 2731 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a5d33b-e024-4409-ab04-3cedb6623ebf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e2a5d33b-e024-4409-ab04-3cedb6623ebf" (UID: "e2a5d33b-e024-4409-ab04-3cedb6623ebf"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:26:48.229259 kubelet[2731]: I1216 12:26:48.229221 2731 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2a5d33b-e024-4409-ab04-3cedb6623ebf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e2a5d33b-e024-4409-ab04-3cedb6623ebf" (UID: "e2a5d33b-e024-4409-ab04-3cedb6623ebf"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:26:48.243239 kubelet[2731]: I1216 12:26:48.243083 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mqw9w" podStartSLOduration=1.7024864480000002 podStartE2EDuration="11.234219109s" podCreationTimestamp="2025-12-16 12:26:37 +0000 UTC" firstStartedPulling="2025-12-16 12:26:38.034931238 +0000 UTC m=+29.123916111" lastFinishedPulling="2025-12-16 12:26:47.566663939 +0000 UTC m=+38.655648772" observedRunningTime="2025-12-16 12:26:48.233396866 +0000 UTC m=+39.322381739" watchObservedRunningTime="2025-12-16 12:26:48.234219109 +0000 UTC m=+39.323203942" Dec 16 12:26:48.269202 systemd[1]: var-lib-kubelet-pods-e2a5d33b\x2de024\x2d4409\x2dab04\x2d3cedb6623ebf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnjfqd.mount: Deactivated successfully. Dec 16 12:26:48.269332 systemd[1]: var-lib-kubelet-pods-e2a5d33b\x2de024\x2d4409\x2dab04\x2d3cedb6623ebf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:26:48.296940 kubelet[2731]: I1216 12:26:48.296401 2731 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-njfqd\" (UniqueName: \"kubernetes.io/projected/e2a5d33b-e024-4409-ab04-3cedb6623ebf-kube-api-access-njfqd\") on node \"localhost\" DevicePath \"\"" Dec 16 12:26:48.296940 kubelet[2731]: I1216 12:26:48.296896 2731 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a5d33b-e024-4409-ab04-3cedb6623ebf-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 16 12:26:48.296940 kubelet[2731]: I1216 12:26:48.296913 2731 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2a5d33b-e024-4409-ab04-3cedb6623ebf-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 16 12:26:48.512497 systemd[1]: Removed slice kubepods-besteffort-pode2a5d33b_e024_4409_ab04_3cedb6623ebf.slice - libcontainer container kubepods-besteffort-pode2a5d33b_e024_4409_ab04_3cedb6623ebf.slice. Dec 16 12:26:48.618228 systemd[1]: Created slice kubepods-besteffort-pod9544238e_7c32_4f71_bf15_98eec7d18a91.slice - libcontainer container kubepods-besteffort-pod9544238e_7c32_4f71_bf15_98eec7d18a91.slice. 
Dec 16 12:26:48.701629 kubelet[2731]: I1216 12:26:48.701571 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9544238e-7c32-4f71-bf15-98eec7d18a91-whisker-backend-key-pair\") pod \"whisker-585749bd7b-mzpmc\" (UID: \"9544238e-7c32-4f71-bf15-98eec7d18a91\") " pod="calico-system/whisker-585749bd7b-mzpmc" Dec 16 12:26:48.701629 kubelet[2731]: I1216 12:26:48.701638 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9544238e-7c32-4f71-bf15-98eec7d18a91-whisker-ca-bundle\") pod \"whisker-585749bd7b-mzpmc\" (UID: \"9544238e-7c32-4f71-bf15-98eec7d18a91\") " pod="calico-system/whisker-585749bd7b-mzpmc" Dec 16 12:26:48.701845 kubelet[2731]: I1216 12:26:48.701666 2731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqpv7\" (UniqueName: \"kubernetes.io/projected/9544238e-7c32-4f71-bf15-98eec7d18a91-kube-api-access-pqpv7\") pod \"whisker-585749bd7b-mzpmc\" (UID: \"9544238e-7c32-4f71-bf15-98eec7d18a91\") " pod="calico-system/whisker-585749bd7b-mzpmc" Dec 16 12:26:48.927180 containerd[1582]: time="2025-12-16T12:26:48.927130798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-585749bd7b-mzpmc,Uid:9544238e-7c32-4f71-bf15-98eec7d18a91,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:49.042077 kubelet[2731]: I1216 12:26:49.042026 2731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a5d33b-e024-4409-ab04-3cedb6623ebf" path="/var/lib/kubelet/pods/e2a5d33b-e024-4409-ab04-3cedb6623ebf/volumes" Dec 16 12:26:49.168532 systemd-networkd[1494]: cali06a0d598268: Link UP Dec 16 12:26:49.168928 systemd-networkd[1494]: cali06a0d598268: Gained carrier Dec 16 12:26:49.188742 containerd[1582]: 2025-12-16 12:26:48.955 [INFO][3968] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:26:49.188742 containerd[1582]: 2025-12-16 12:26:48.999 [INFO][3968] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--585749bd7b--mzpmc-eth0 whisker-585749bd7b- calico-system 9544238e-7c32-4f71-bf15-98eec7d18a91 976 0 2025-12-16 12:26:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:585749bd7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-585749bd7b-mzpmc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali06a0d598268 [] [] }} ContainerID="8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" Namespace="calico-system" Pod="whisker-585749bd7b-mzpmc" WorkloadEndpoint="localhost-k8s-whisker--585749bd7b--mzpmc-" Dec 16 12:26:49.188742 containerd[1582]: 2025-12-16 12:26:49.000 [INFO][3968] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" Namespace="calico-system" Pod="whisker-585749bd7b-mzpmc" WorkloadEndpoint="localhost-k8s-whisker--585749bd7b--mzpmc-eth0" Dec 16 12:26:49.188742 containerd[1582]: 2025-12-16 12:26:49.103 [INFO][3978] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" HandleID="k8s-pod-network.8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" 
Workload="localhost-k8s-whisker--585749bd7b--mzpmc-eth0" Dec 16 12:26:49.188996 containerd[1582]: 2025-12-16 12:26:49.103 [INFO][3978] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" HandleID="k8s-pod-network.8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" Workload="localhost-k8s-whisker--585749bd7b--mzpmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004ea9c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-585749bd7b-mzpmc", "timestamp":"2025-12-16 12:26:49.103786875 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:49.188996 containerd[1582]: 2025-12-16 12:26:49.104 [INFO][3978] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:26:49.188996 containerd[1582]: 2025-12-16 12:26:49.104 [INFO][3978] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:26:49.188996 containerd[1582]: 2025-12-16 12:26:49.104 [INFO][3978] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:26:49.188996 containerd[1582]: 2025-12-16 12:26:49.116 [INFO][3978] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" host="localhost" Dec 16 12:26:49.188996 containerd[1582]: 2025-12-16 12:26:49.129 [INFO][3978] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:26:49.188996 containerd[1582]: 2025-12-16 12:26:49.136 [INFO][3978] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:26:49.188996 containerd[1582]: 2025-12-16 12:26:49.138 [INFO][3978] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:49.188996 containerd[1582]: 2025-12-16 12:26:49.141 [INFO][3978] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:49.188996 containerd[1582]: 2025-12-16 12:26:49.142 [INFO][3978] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" host="localhost" Dec 16 12:26:49.189196 containerd[1582]: 2025-12-16 12:26:49.144 [INFO][3978] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f Dec 16 12:26:49.189196 containerd[1582]: 2025-12-16 12:26:49.149 [INFO][3978] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" host="localhost" Dec 16 12:26:49.189196 containerd[1582]: 2025-12-16 12:26:49.156 [INFO][3978] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" host="localhost" Dec 16 12:26:49.189196 containerd[1582]: 2025-12-16 12:26:49.157 [INFO][3978] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" host="localhost" Dec 16 12:26:49.189196 containerd[1582]: 2025-12-16 12:26:49.157 [INFO][3978] ipam/ipam_plugin.go 398: 
Released host-wide IPAM lock. Dec 16 12:26:49.189196 containerd[1582]: 2025-12-16 12:26:49.157 [INFO][3978] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" HandleID="k8s-pod-network.8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" Workload="localhost-k8s-whisker--585749bd7b--mzpmc-eth0" Dec 16 12:26:49.189919 containerd[1582]: 2025-12-16 12:26:49.160 [INFO][3968] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" Namespace="calico-system" Pod="whisker-585749bd7b-mzpmc" WorkloadEndpoint="localhost-k8s-whisker--585749bd7b--mzpmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--585749bd7b--mzpmc-eth0", GenerateName:"whisker-585749bd7b-", Namespace:"calico-system", SelfLink:"", UID:"9544238e-7c32-4f71-bf15-98eec7d18a91", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"585749bd7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-585749bd7b-mzpmc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali06a0d598268", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:49.189919 containerd[1582]: 2025-12-16 12:26:49.160 [INFO][3968] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" Namespace="calico-system" Pod="whisker-585749bd7b-mzpmc" WorkloadEndpoint="localhost-k8s-whisker--585749bd7b--mzpmc-eth0" Dec 16 12:26:49.190009 containerd[1582]: 2025-12-16 12:26:49.160 [INFO][3968] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06a0d598268 ContainerID="8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" Namespace="calico-system" Pod="whisker-585749bd7b-mzpmc" WorkloadEndpoint="localhost-k8s-whisker--585749bd7b--mzpmc-eth0" Dec 16 12:26:49.190009 containerd[1582]: 2025-12-16 12:26:49.170 [INFO][3968] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" Namespace="calico-system" Pod="whisker-585749bd7b-mzpmc" WorkloadEndpoint="localhost-k8s-whisker--585749bd7b--mzpmc-eth0" Dec 16 12:26:49.190050 containerd[1582]: 2025-12-16 12:26:49.171 [INFO][3968] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" Namespace="calico-system" Pod="whisker-585749bd7b-mzpmc" WorkloadEndpoint="localhost-k8s-whisker--585749bd7b--mzpmc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--585749bd7b--mzpmc-eth0", GenerateName:"whisker-585749bd7b-", Namespace:"calico-system", SelfLink:"", UID:"9544238e-7c32-4f71-bf15-98eec7d18a91", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"585749bd7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f", Pod:"whisker-585749bd7b-mzpmc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali06a0d598268", MAC:"9a:7b:6a:ec:df:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:49.190093 containerd[1582]: 2025-12-16 12:26:49.184 [INFO][3968] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" Namespace="calico-system" Pod="whisker-585749bd7b-mzpmc" WorkloadEndpoint="localhost-k8s-whisker--585749bd7b--mzpmc-eth0" Dec 16 12:26:49.209255 kubelet[2731]: I1216 12:26:49.209207 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:26:49.209825 kubelet[2731]: E1216 12:26:49.209802 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:49.493278 containerd[1582]: time="2025-12-16T12:26:49.493133464Z" level=info msg="connecting to shim 8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f" address="unix:///run/containerd/s/c2ee1f53c857ecaf253c6cc55b302c0de4f991599216237e07f896065a691e6e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:49.542755 systemd[1]: Started sshd@7-10.0.0.45:22-10.0.0.1:60040.service - OpenSSH per-connection server daemon (10.0.0.1:60040). Dec 16 12:26:49.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.45:22-10.0.0.1:60040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:49.561972 systemd[1]: Started cri-containerd-8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f.scope - libcontainer container 8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f. 
Dec 16 12:26:49.587000 audit: BPF prog-id=178 op=LOAD Dec 16 12:26:49.588000 audit: BPF prog-id=179 op=LOAD Dec 16 12:26:49.588000 audit[4109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=4090 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:49.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861336430346264313963393131663734396266616239373636643361 Dec 16 12:26:49.588000 audit: BPF prog-id=179 op=UNLOAD Dec 16 12:26:49.588000 audit[4109]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4090 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:49.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861336430346264313963393131663734396266616239373636643361 Dec 16 12:26:49.590000 audit: BPF prog-id=180 op=LOAD Dec 16 12:26:49.590000 audit[4109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4090 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:49.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861336430346264313963393131663734396266616239373636643361 Dec 16 12:26:49.590000 audit: BPF prog-id=181 op=LOAD Dec 16 12:26:49.590000 audit[4109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=4090 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:49.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861336430346264313963393131663734396266616239373636643361 Dec 16 12:26:49.590000 audit: BPF prog-id=181 op=UNLOAD Dec 16 12:26:49.590000 audit[4109]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4090 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:49.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861336430346264313963393131663734396266616239373636643361 Dec 16 12:26:49.590000 audit: BPF prog-id=180 op=UNLOAD Dec 16 12:26:49.590000 audit[4109]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4090 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:49.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861336430346264313963393131663734396266616239373636643361 Dec 16 12:26:49.590000 audit: BPF prog-id=182 op=LOAD Dec 16 12:26:49.590000 audit[4109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=4090 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:49.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861336430346264313963393131663734396266616239373636643361 Dec 16 12:26:49.631278 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:26:49.649000 audit[4122]: USER_ACCT pid=4122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:49.651609 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 60040 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:26:49.652000 audit[4122]: CRED_ACQ pid=4122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:49.652000 audit[4122]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4d14a10 a2=3 a3=0 items=0 ppid=1 pid=4122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:49.652000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:26:49.654881 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:49.664652 systemd-logind[1553]: New session 8 of user core. Dec 16 12:26:49.672587 systemd[1]: Started session-8.scope - Session 8 of User core. 
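The audit PROCTITLE fields above (and the earlier ones around the calico-node start) are the process's argv, hex-encoded with NUL separators; the one just above decodes to the runc invocation for this sandbox, roughly "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/8a3d04bd19c911f749bfab9766d3a…", truncated by the kernel. A short sketch that decodes any such value passed on the command line:

```go
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

// Sketch: decode an audit PROCTITLE value like the ones above. The kernel records
// argv hex-encoded, with NUL bytes between arguments; pass the hex string as the
// first argument to print it as a readable command line.
func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: decode-proctitle <hex>")
		os.Exit(1)
	}
	raw, err := hex.DecodeString(strings.TrimSpace(os.Args[1]))
	if err != nil {
		fmt.Fprintln(os.Stderr, "not valid hex:", err)
		os.Exit(1)
	}
	// Arguments are separated by NUL bytes; join them with spaces for display.
	args := bytes.Split(raw, []byte{0})
	parts := make([]string, 0, len(args))
	for _, a := range args {
		parts = append(parts, string(a))
	}
	fmt.Println(strings.Join(parts, " "))
}
```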
Dec 16 12:26:49.675000 audit[4122]: USER_START pid=4122 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:49.679000 audit[4135]: CRED_ACQ pid=4135 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:49.776617 containerd[1582]: time="2025-12-16T12:26:49.776470928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-585749bd7b-mzpmc,Uid:9544238e-7c32-4f71-bf15-98eec7d18a91,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a3d04bd19c911f749bfab9766d3aba02b62bb518fe548a4d57bcc0e20b4886f\"" Dec 16 12:26:49.779070 containerd[1582]: time="2025-12-16T12:26:49.779035579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:26:49.859235 sshd[4135]: Connection closed by 10.0.0.1 port 60040 Dec 16 12:26:49.859920 sshd-session[4122]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:49.860000 audit[4122]: USER_END pid=4122 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:49.860000 audit[4122]: CRED_DISP pid=4122 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:49.864918 systemd[1]: sshd@7-10.0.0.45:22-10.0.0.1:60040.service: Deactivated successfully. Dec 16 12:26:49.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.45:22-10.0.0.1:60040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:49.867445 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:26:49.868693 systemd-logind[1553]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:26:49.870745 systemd-logind[1553]: Removed session 8. 
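The "Nameserver limits exceeded" messages from dns.go:154 above recur because the host resolv.conf lists more nameservers than the kubelet will apply to pods. A sketch of that truncation, under the assumption that the cap is three entries, which matches the exactly three servers ("1.1.1.1 1.0.0.1 8.8.8.8") reported as applied:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Sketch of the behaviour behind the recurring "Nameserver limits exceeded"
// messages above. Assumption: the kubelet applies at most three nameservers
// and drops the rest, which is consistent with the applied line in the log.
func main() {
	const maxNameservers = 3

	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d found, applying %v\n",
			len(servers), servers[:maxNameservers])
		return
	}
	fmt.Println("applied nameservers:", servers)
}
```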
Dec 16 12:26:50.071116 containerd[1582]: time="2025-12-16T12:26:50.070835606Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:50.074146 containerd[1582]: time="2025-12-16T12:26:50.074004867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:26:50.074146 containerd[1582]: time="2025-12-16T12:26:50.074020349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:50.074356 kubelet[2731]: E1216 12:26:50.074295 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:26:50.077892 kubelet[2731]: E1216 12:26:50.077825 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:26:50.079130 kubelet[2731]: E1216 12:26:50.079057 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-585749bd7b-mzpmc_calico-system(9544238e-7c32-4f71-bf15-98eec7d18a91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:50.080306 containerd[1582]: time="2025-12-16T12:26:50.080266983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:26:50.362575 containerd[1582]: time="2025-12-16T12:26:50.362518552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:50.363708 containerd[1582]: time="2025-12-16T12:26:50.363634818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:26:50.363783 containerd[1582]: time="2025-12-16T12:26:50.363744788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:50.363998 kubelet[2731]: E1216 12:26:50.363944 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:26:50.364302 kubelet[2731]: E1216 12:26:50.363996 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:26:50.364302 kubelet[2731]: E1216 12:26:50.364085 2731 
kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-585749bd7b-mzpmc_calico-system(9544238e-7c32-4f71-bf15-98eec7d18a91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:50.364302 kubelet[2731]: E1216 12:26:50.364126 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585749bd7b-mzpmc" podUID="9544238e-7c32-4f71-bf15-98eec7d18a91" Dec 16 12:26:51.075512 systemd-networkd[1494]: cali06a0d598268: Gained IPv6LL Dec 16 12:26:51.220889 kubelet[2731]: E1216 12:26:51.220784 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585749bd7b-mzpmc" podUID="9544238e-7c32-4f71-bf15-98eec7d18a91" Dec 16 12:26:51.277000 audit[4185]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:51.277000 audit[4185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffff92afe0 a2=0 a3=1 items=0 ppid=2851 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:51.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:51.290000 audit[4185]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:51.290000 audit[4185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffff92afe0 a2=0 a3=1 items=0 ppid=2851 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:51.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:54.406443 kubelet[2731]: I1216 
12:26:54.406385 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:26:54.406901 kubelet[2731]: E1216 12:26:54.406855 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:54.876167 systemd[1]: Started sshd@8-10.0.0.45:22-10.0.0.1:39114.service - OpenSSH per-connection server daemon (10.0.0.1:39114). Dec 16 12:26:54.877388 kernel: kauditd_printk_skb: 44 callbacks suppressed Dec 16 12:26:54.877437 kernel: audit: type=1130 audit(1765888014.874:593): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.45:22-10.0.0.1:39114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:54.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.45:22-10.0.0.1:39114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:54.942000 audit[4312]: USER_ACCT pid=4312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:54.943927 sshd[4312]: Accepted publickey for core from 10.0.0.1 port 39114 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:26:54.947366 kernel: audit: type=1101 audit(1765888014.942:594): pid=4312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:54.946000 audit[4312]: CRED_ACQ pid=4312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:54.948294 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:54.952638 kernel: audit: type=1103 audit(1765888014.946:595): pid=4312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:54.952747 kernel: audit: type=1006 audit(1765888014.946:596): pid=4312 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 16 12:26:54.952777 kernel: audit: type=1300 audit(1765888014.946:596): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe9673950 a2=3 a3=0 items=0 ppid=1 pid=4312 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:54.946000 audit[4312]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe9673950 a2=3 a3=0 items=0 ppid=1 pid=4312 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:54.956402 kernel: audit: type=1327 
audit(1765888014.946:596): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:26:54.946000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:26:54.959784 systemd-logind[1553]: New session 9 of user core. Dec 16 12:26:54.968605 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:26:54.970000 audit[4312]: USER_START pid=4312 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:54.974000 audit[4315]: CRED_ACQ pid=4315 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:54.978935 kernel: audit: type=1105 audit(1765888014.970:597): pid=4312 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:54.979035 kernel: audit: type=1103 audit(1765888014.974:598): pid=4315 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:55.033785 containerd[1582]: time="2025-12-16T12:26:55.033570895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-cbqqb,Uid:3e707694-c3c3-46cb-9c34-819568db9981,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:55.078398 sshd[4315]: Connection closed by 10.0.0.1 port 39114 Dec 16 12:26:55.078898 sshd-session[4312]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:55.080000 audit[4312]: USER_END pid=4312 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:55.080000 audit[4312]: CRED_DISP pid=4312 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:55.086714 systemd[1]: sshd@8-10.0.0.45:22-10.0.0.1:39114.service: Deactivated successfully. Dec 16 12:26:55.087976 kernel: audit: type=1106 audit(1765888015.080:599): pid=4312 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:55.088072 kernel: audit: type=1104 audit(1765888015.080:600): pid=4312 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:26:55.088531 systemd[1]: session-9.scope: Deactivated successfully. 
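The whisker and whisker-backend pulls above fail with 404s from ghcr.io, so the whisker pod sits in ImagePullBackOff while the other Calico images resolve fine. A rough sketch for checking a tag directly against the registry's OCI distribution endpoint; the repository names and tag are taken from the log, but the anonymous pull-token flow via ghcr.io's /token endpoint is an assumption (it follows the usual Docker-Registry-v2 token scheme), not something confirmed by this log:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// Rough sketch: ask ghcr.io whether a tag exists, to reproduce the 404s above.
// Assumption (not from the log): ghcr.io issues anonymous pull tokens from its
// /token endpoint for public repositories, as registry-v2 implementations generally do.
func tagStatus(repo, tag string) (int, error) {
	// 1. Fetch an anonymous pull token for the repository.
	var tok struct {
		Token string `json:"token"`
	}
	resp, err := http.Get("https://ghcr.io/token?scope=" + url.QueryEscape("repository:"+repo+":pull"))
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return 0, err
	}

	// 2. HEAD the manifest; 200 means the tag exists, 404 matches the failures above.
	req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	res.Body.Close()
	return res.StatusCode, nil
}

func main() {
	for _, repo := range []string{"flatcar/calico/whisker", "flatcar/calico/whisker-backend"} {
		code, err := tagStatus(repo, "v3.30.4")
		if err != nil {
			fmt.Println(repo, "error:", err)
			continue
		}
		fmt.Printf("%s:v3.30.4 -> HTTP %d\n", repo, code)
	}
}
```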
Dec 16 12:26:55.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.45:22-10.0.0.1:39114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:26:55.092574 systemd-logind[1553]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:26:55.099466 systemd-logind[1553]: Removed session 9. Dec 16 12:26:55.244039 systemd-networkd[1494]: cali55d08cec961: Link UP Dec 16 12:26:55.244855 systemd-networkd[1494]: cali55d08cec961: Gained carrier Dec 16 12:26:55.261264 containerd[1582]: 2025-12-16 12:26:55.075 [INFO][4333] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:26:55.261264 containerd[1582]: 2025-12-16 12:26:55.101 [INFO][4333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--cbqqb-eth0 goldmane-7c778bb748- calico-system 3e707694-c3c3-46cb-9c34-819568db9981 911 0 2025-12-16 12:26:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-cbqqb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali55d08cec961 [] [] }} ContainerID="b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" Namespace="calico-system" Pod="goldmane-7c778bb748-cbqqb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cbqqb-" Dec 16 12:26:55.261264 containerd[1582]: 2025-12-16 12:26:55.101 [INFO][4333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" Namespace="calico-system" Pod="goldmane-7c778bb748-cbqqb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cbqqb-eth0" Dec 16 12:26:55.261264 containerd[1582]: 2025-12-16 12:26:55.153 [INFO][4358] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" HandleID="k8s-pod-network.b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" Workload="localhost-k8s-goldmane--7c778bb748--cbqqb-eth0" Dec 16 12:26:55.261576 containerd[1582]: 2025-12-16 12:26:55.154 [INFO][4358] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" HandleID="k8s-pod-network.b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" Workload="localhost-k8s-goldmane--7c778bb748--cbqqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035cfd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-cbqqb", "timestamp":"2025-12-16 12:26:55.15392129 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:55.261576 containerd[1582]: 2025-12-16 12:26:55.154 [INFO][4358] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:26:55.261576 containerd[1582]: 2025-12-16 12:26:55.154 [INFO][4358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:26:55.261576 containerd[1582]: 2025-12-16 12:26:55.154 [INFO][4358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:26:55.261576 containerd[1582]: 2025-12-16 12:26:55.168 [INFO][4358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" host="localhost" Dec 16 12:26:55.261576 containerd[1582]: 2025-12-16 12:26:55.181 [INFO][4358] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:26:55.261576 containerd[1582]: 2025-12-16 12:26:55.189 [INFO][4358] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:26:55.261576 containerd[1582]: 2025-12-16 12:26:55.195 [INFO][4358] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:55.261576 containerd[1582]: 2025-12-16 12:26:55.198 [INFO][4358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:55.261576 containerd[1582]: 2025-12-16 12:26:55.198 [INFO][4358] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" host="localhost" Dec 16 12:26:55.261777 containerd[1582]: 2025-12-16 12:26:55.201 [INFO][4358] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc Dec 16 12:26:55.261777 containerd[1582]: 2025-12-16 12:26:55.229 [INFO][4358] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" host="localhost" Dec 16 12:26:55.261777 containerd[1582]: 2025-12-16 12:26:55.238 [INFO][4358] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" host="localhost" Dec 16 12:26:55.261777 containerd[1582]: 2025-12-16 12:26:55.238 [INFO][4358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" host="localhost" Dec 16 12:26:55.261777 containerd[1582]: 2025-12-16 12:26:55.238 [INFO][4358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:26:55.261777 containerd[1582]: 2025-12-16 12:26:55.239 [INFO][4358] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" HandleID="k8s-pod-network.b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" Workload="localhost-k8s-goldmane--7c778bb748--cbqqb-eth0" Dec 16 12:26:55.261912 containerd[1582]: 2025-12-16 12:26:55.241 [INFO][4333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" Namespace="calico-system" Pod="goldmane-7c778bb748-cbqqb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cbqqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--cbqqb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"3e707694-c3c3-46cb-9c34-819568db9981", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-cbqqb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali55d08cec961", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:55.261912 containerd[1582]: 2025-12-16 12:26:55.242 [INFO][4333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" Namespace="calico-system" Pod="goldmane-7c778bb748-cbqqb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cbqqb-eth0" Dec 16 12:26:55.261984 containerd[1582]: 2025-12-16 12:26:55.242 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55d08cec961 ContainerID="b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" Namespace="calico-system" Pod="goldmane-7c778bb748-cbqqb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cbqqb-eth0" Dec 16 12:26:55.261984 containerd[1582]: 2025-12-16 12:26:55.244 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" Namespace="calico-system" Pod="goldmane-7c778bb748-cbqqb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cbqqb-eth0" Dec 16 12:26:55.262025 containerd[1582]: 2025-12-16 12:26:55.245 [INFO][4333] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" Namespace="calico-system" Pod="goldmane-7c778bb748-cbqqb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cbqqb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--cbqqb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"3e707694-c3c3-46cb-9c34-819568db9981", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc", Pod:"goldmane-7c778bb748-cbqqb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali55d08cec961", MAC:"7a:17:fc:39:f2:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:55.262106 containerd[1582]: 2025-12-16 12:26:55.258 [INFO][4333] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" Namespace="calico-system" Pod="goldmane-7c778bb748-cbqqb" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cbqqb-eth0" Dec 16 12:26:55.292156 containerd[1582]: time="2025-12-16T12:26:55.292091925Z" level=info msg="connecting to shim b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc" address="unix:///run/containerd/s/76358f353fcca1d741734e91536a4c34295a64094c0acef6b8f606100fa1212a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:55.322584 systemd[1]: Started cri-containerd-b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc.scope - libcontainer container b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc. 
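The "connecting to shim ... address=unix:///run/containerd/s/<id> ... protocol=ttrpc" entry above shows containerd reaching the per-container runtime shim over a unix-domain socket. The sketch below is only a plain standard-library dial of that path to show it is an ordinary AF_UNIX socket; the real client speaks ttrpc over it, and the dial will of course fail anywhere other than the node that produced this log.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Socket path copied from the "connecting to shim" entry above.
        const addr = "/run/containerd/s/76358f353fcca1d741734e91536a4c34295a64094c0acef6b8f606100fa1212a"
        conn, err := net.DialTimeout("unix", addr, 2*time.Second)
        if err != nil {
            fmt.Println("dial failed (expected off the node):", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected to shim socket")
    }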
Dec 16 12:26:55.332000 audit: BPF prog-id=183 op=LOAD Dec 16 12:26:55.332000 audit: BPF prog-id=184 op=LOAD Dec 16 12:26:55.332000 audit[4401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4390 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:55.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230306563323439353239646637646134623738333264323732663131 Dec 16 12:26:55.333000 audit: BPF prog-id=184 op=UNLOAD Dec 16 12:26:55.333000 audit[4401]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4390 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:55.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230306563323439353239646637646134623738333264323732663131 Dec 16 12:26:55.333000 audit: BPF prog-id=185 op=LOAD Dec 16 12:26:55.333000 audit[4401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4390 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:55.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230306563323439353239646637646134623738333264323732663131 Dec 16 12:26:55.333000 audit: BPF prog-id=186 op=LOAD Dec 16 12:26:55.333000 audit[4401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4390 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:55.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230306563323439353239646637646134623738333264323732663131 Dec 16 12:26:55.333000 audit: BPF prog-id=186 op=UNLOAD Dec 16 12:26:55.333000 audit[4401]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4390 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:55.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230306563323439353239646637646134623738333264323732663131 Dec 16 12:26:55.333000 audit: BPF prog-id=185 op=UNLOAD Dec 16 12:26:55.333000 audit[4401]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4390 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:55.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230306563323439353239646637646134623738333264323732663131 Dec 16 12:26:55.333000 audit: BPF prog-id=187 op=LOAD Dec 16 12:26:55.333000 audit[4401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4390 pid=4401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:55.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230306563323439353239646637646134623738333264323732663131 Dec 16 12:26:55.335355 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:26:55.377349 containerd[1582]: time="2025-12-16T12:26:55.377277559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-cbqqb,Uid:3e707694-c3c3-46cb-9c34-819568db9981,Namespace:calico-system,Attempt:0,} returns sandbox id \"b00ec249529df7da4b7832d272f11b37b7849fd89a063c406a190562a0c5c3bc\"" Dec 16 12:26:55.379493 containerd[1582]: time="2025-12-16T12:26:55.379453740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:26:55.574496 containerd[1582]: time="2025-12-16T12:26:55.574269159Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:55.589849 containerd[1582]: time="2025-12-16T12:26:55.589632675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:26:55.589849 containerd[1582]: time="2025-12-16T12:26:55.589674958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:55.590394 kubelet[2731]: E1216 12:26:55.590157 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:26:55.591840 kubelet[2731]: E1216 12:26:55.591438 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:26:55.591840 kubelet[2731]: E1216 12:26:55.591549 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-cbqqb_calico-system(3e707694-c3c3-46cb-9c34-819568db9981): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:55.591840 kubelet[2731]: E1216 12:26:55.591584 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cbqqb" podUID="3e707694-c3c3-46cb-9c34-819568db9981" Dec 16 12:26:56.033510 containerd[1582]: time="2025-12-16T12:26:56.033463708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-686cd7448d-zpgvc,Uid:cb75681b-286c-4a6b-a7e3-6df9c7f59d30,Namespace:calico-system,Attempt:0,}" Dec 16 12:26:56.036560 containerd[1582]: time="2025-12-16T12:26:56.036480751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fcfc8d-xt2gx,Uid:d237c895-0922-4d34-99eb-c1d5a8780e41,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:26:56.039944 kubelet[2731]: E1216 12:26:56.039907 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:56.040677 containerd[1582]: time="2025-12-16T12:26:56.040547720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-z6rv6,Uid:cf1efc62-d121-4b86-8964-6178a6812710,Namespace:kube-system,Attempt:0,}" Dec 16 12:26:56.244054 kubelet[2731]: E1216 12:26:56.244007 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cbqqb" podUID="3e707694-c3c3-46cb-9c34-819568db9981" Dec 16 12:26:56.245673 systemd-networkd[1494]: cali3f96b60eb6f: Link UP Dec 16 12:26:56.246384 systemd-networkd[1494]: cali3f96b60eb6f: Gained carrier Dec 16 12:26:56.262380 containerd[1582]: 2025-12-16 12:26:56.094 [INFO][4439] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:26:56.262380 containerd[1582]: 2025-12-16 12:26:56.113 [INFO][4439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--z6rv6-eth0 coredns-66bc5c9577- kube-system cf1efc62-d121-4b86-8964-6178a6812710 907 0 2025-12-16 12:26:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-z6rv6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f96b60eb6f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" Namespace="kube-system" Pod="coredns-66bc5c9577-z6rv6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z6rv6-" Dec 16 12:26:56.262380 containerd[1582]: 2025-12-16 12:26:56.113 [INFO][4439] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" Namespace="kube-system" Pod="coredns-66bc5c9577-z6rv6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z6rv6-eth0" Dec 16 12:26:56.262380 containerd[1582]: 2025-12-16 12:26:56.164 [INFO][4470] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" HandleID="k8s-pod-network.f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" Workload="localhost-k8s-coredns--66bc5c9577--z6rv6-eth0" Dec 16 12:26:56.262625 containerd[1582]: 2025-12-16 12:26:56.164 [INFO][4470] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" HandleID="k8s-pod-network.f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" Workload="localhost-k8s-coredns--66bc5c9577--z6rv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-z6rv6", "timestamp":"2025-12-16 12:26:56.164058827 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:56.262625 containerd[1582]: 2025-12-16 12:26:56.164 [INFO][4470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:26:56.262625 containerd[1582]: 2025-12-16 12:26:56.164 [INFO][4470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:26:56.262625 containerd[1582]: 2025-12-16 12:26:56.164 [INFO][4470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:26:56.262625 containerd[1582]: 2025-12-16 12:26:56.181 [INFO][4470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" host="localhost" Dec 16 12:26:56.262625 containerd[1582]: 2025-12-16 12:26:56.188 [INFO][4470] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:26:56.262625 containerd[1582]: 2025-12-16 12:26:56.194 [INFO][4470] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:26:56.262625 containerd[1582]: 2025-12-16 12:26:56.197 [INFO][4470] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:56.262625 containerd[1582]: 2025-12-16 12:26:56.202 [INFO][4470] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:56.262625 containerd[1582]: 2025-12-16 12:26:56.202 [INFO][4470] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" host="localhost" Dec 16 12:26:56.262842 containerd[1582]: 2025-12-16 12:26:56.206 [INFO][4470] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341 Dec 16 12:26:56.262842 containerd[1582]: 2025-12-16 12:26:56.217 [INFO][4470] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" host="localhost" Dec 16 12:26:56.262842 containerd[1582]: 2025-12-16 
12:26:56.228 [INFO][4470] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" host="localhost" Dec 16 12:26:56.262842 containerd[1582]: 2025-12-16 12:26:56.228 [INFO][4470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" host="localhost" Dec 16 12:26:56.262842 containerd[1582]: 2025-12-16 12:26:56.228 [INFO][4470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:26:56.262842 containerd[1582]: 2025-12-16 12:26:56.228 [INFO][4470] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" HandleID="k8s-pod-network.f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" Workload="localhost-k8s-coredns--66bc5c9577--z6rv6-eth0" Dec 16 12:26:56.262961 containerd[1582]: 2025-12-16 12:26:56.236 [INFO][4439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" Namespace="kube-system" Pod="coredns-66bc5c9577-z6rv6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z6rv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--z6rv6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"cf1efc62-d121-4b86-8964-6178a6812710", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-z6rv6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f96b60eb6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:56.262961 containerd[1582]: 2025-12-16 12:26:56.237 [INFO][4439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] 
ContainerID="f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" Namespace="kube-system" Pod="coredns-66bc5c9577-z6rv6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z6rv6-eth0" Dec 16 12:26:56.262961 containerd[1582]: 2025-12-16 12:26:56.239 [INFO][4439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f96b60eb6f ContainerID="f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" Namespace="kube-system" Pod="coredns-66bc5c9577-z6rv6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z6rv6-eth0" Dec 16 12:26:56.262961 containerd[1582]: 2025-12-16 12:26:56.246 [INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" Namespace="kube-system" Pod="coredns-66bc5c9577-z6rv6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z6rv6-eth0" Dec 16 12:26:56.262961 containerd[1582]: 2025-12-16 12:26:56.246 [INFO][4439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" Namespace="kube-system" Pod="coredns-66bc5c9577-z6rv6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z6rv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--z6rv6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"cf1efc62-d121-4b86-8964-6178a6812710", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341", Pod:"coredns-66bc5c9577-z6rv6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f96b60eb6f", MAC:"3e:ac:e4:42:e4:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:56.262961 containerd[1582]: 2025-12-16 12:26:56.256 [INFO][4439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" Namespace="kube-system" Pod="coredns-66bc5c9577-z6rv6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--z6rv6-eth0" Dec 16 12:26:56.268000 audit[4526]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:56.268000 audit[4526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff3e7f1b0 a2=0 a3=1 items=0 ppid=2851 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.268000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:56.277000 audit[4526]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:56.277000 audit[4526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff3e7f1b0 a2=0 a3=1 items=0 ppid=2851 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:56.313057 containerd[1582]: time="2025-12-16T12:26:56.312240648Z" level=info msg="connecting to shim f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341" address="unix:///run/containerd/s/216bff56a7c0481b61ad7a816df2d5d4baa528263a5039fdf07012298529455f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:56.343084 systemd[1]: Started cri-containerd-f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341.scope - libcontainer container f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341. 
Dec 16 12:26:56.360106 systemd-networkd[1494]: calia111e775334: Link UP Dec 16 12:26:56.362951 kubelet[2731]: I1216 12:26:56.361870 2731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:26:56.362951 kubelet[2731]: E1216 12:26:56.362229 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:56.363450 systemd-networkd[1494]: calia111e775334: Gained carrier Dec 16 12:26:56.373000 audit: BPF prog-id=188 op=LOAD Dec 16 12:26:56.376000 audit: BPF prog-id=189 op=LOAD Dec 16 12:26:56.376000 audit[4548]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=4536 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635376638323862643438663163306163633361656232313266626330 Dec 16 12:26:56.376000 audit: BPF prog-id=189 op=UNLOAD Dec 16 12:26:56.376000 audit[4548]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4536 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635376638323862643438663163306163633361656232313266626330 Dec 16 12:26:56.376000 audit: BPF prog-id=190 op=LOAD Dec 16 12:26:56.376000 audit[4548]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=4536 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635376638323862643438663163306163633361656232313266626330 Dec 16 12:26:56.376000 audit: BPF prog-id=191 op=LOAD Dec 16 12:26:56.376000 audit[4548]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=4536 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635376638323862643438663163306163633361656232313266626330 Dec 16 12:26:56.376000 audit: BPF prog-id=191 op=UNLOAD Dec 16 12:26:56.376000 audit[4548]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4536 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635376638323862643438663163306163633361656232313266626330 Dec 16 12:26:56.376000 audit: BPF prog-id=190 op=UNLOAD Dec 16 12:26:56.376000 audit[4548]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4536 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635376638323862643438663163306163633361656232313266626330 Dec 16 12:26:56.376000 audit: BPF prog-id=192 op=LOAD Dec 16 12:26:56.376000 audit[4548]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=4536 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635376638323862643438663163306163633361656232313266626330 Dec 16 12:26:56.379966 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.097 [INFO][4429] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.128 [INFO][4429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--686cd7448d--zpgvc-eth0 calico-kube-controllers-686cd7448d- calico-system cb75681b-286c-4a6b-a7e3-6df9c7f59d30 914 0 2025-12-16 12:26:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:686cd7448d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-686cd7448d-zpgvc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia111e775334 [] [] }} ContainerID="b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" Namespace="calico-system" Pod="calico-kube-controllers-686cd7448d-zpgvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686cd7448d--zpgvc-" Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.129 [INFO][4429] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" Namespace="calico-system" Pod="calico-kube-controllers-686cd7448d-zpgvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686cd7448d--zpgvc-eth0" Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 
12:26:56.170 [INFO][4482] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" HandleID="k8s-pod-network.b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" Workload="localhost-k8s-calico--kube--controllers--686cd7448d--zpgvc-eth0" Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.170 [INFO][4482] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" HandleID="k8s-pod-network.b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" Workload="localhost-k8s-calico--kube--controllers--686cd7448d--zpgvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-686cd7448d-zpgvc", "timestamp":"2025-12-16 12:26:56.17053735 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.170 [INFO][4482] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.228 [INFO][4482] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.228 [INFO][4482] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.282 [INFO][4482] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" host="localhost" Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.290 [INFO][4482] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.303 [INFO][4482] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.308 [INFO][4482] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.319 [INFO][4482] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.320 [INFO][4482] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" host="localhost" Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.327 [INFO][4482] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7 Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.334 [INFO][4482] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" host="localhost" Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.350 [INFO][4482] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" host="localhost" Dec 16 12:26:56.394548 containerd[1582]: 
2025-12-16 12:26:56.351 [INFO][4482] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" host="localhost" Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.351 [INFO][4482] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:26:56.394548 containerd[1582]: 2025-12-16 12:26:56.351 [INFO][4482] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" HandleID="k8s-pod-network.b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" Workload="localhost-k8s-calico--kube--controllers--686cd7448d--zpgvc-eth0" Dec 16 12:26:56.395107 containerd[1582]: 2025-12-16 12:26:56.356 [INFO][4429] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" Namespace="calico-system" Pod="calico-kube-controllers-686cd7448d-zpgvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686cd7448d--zpgvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--686cd7448d--zpgvc-eth0", GenerateName:"calico-kube-controllers-686cd7448d-", Namespace:"calico-system", SelfLink:"", UID:"cb75681b-286c-4a6b-a7e3-6df9c7f59d30", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"686cd7448d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-686cd7448d-zpgvc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia111e775334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:56.395107 containerd[1582]: 2025-12-16 12:26:56.356 [INFO][4429] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" Namespace="calico-system" Pod="calico-kube-controllers-686cd7448d-zpgvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686cd7448d--zpgvc-eth0" Dec 16 12:26:56.395107 containerd[1582]: 2025-12-16 12:26:56.356 [INFO][4429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia111e775334 ContainerID="b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" Namespace="calico-system" Pod="calico-kube-controllers-686cd7448d-zpgvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686cd7448d--zpgvc-eth0" Dec 16 12:26:56.395107 containerd[1582]: 2025-12-16 12:26:56.363 [INFO][4429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" Namespace="calico-system" Pod="calico-kube-controllers-686cd7448d-zpgvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686cd7448d--zpgvc-eth0" Dec 16 12:26:56.395107 containerd[1582]: 2025-12-16 12:26:56.364 [INFO][4429] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" Namespace="calico-system" Pod="calico-kube-controllers-686cd7448d-zpgvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686cd7448d--zpgvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--686cd7448d--zpgvc-eth0", GenerateName:"calico-kube-controllers-686cd7448d-", Namespace:"calico-system", SelfLink:"", UID:"cb75681b-286c-4a6b-a7e3-6df9c7f59d30", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"686cd7448d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7", Pod:"calico-kube-controllers-686cd7448d-zpgvc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia111e775334", MAC:"b6:ab:84:d9:48:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:56.395107 containerd[1582]: 2025-12-16 12:26:56.386 [INFO][4429] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" Namespace="calico-system" Pod="calico-kube-controllers-686cd7448d-zpgvc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686cd7448d--zpgvc-eth0" Dec 16 12:26:56.434464 containerd[1582]: time="2025-12-16T12:26:56.434413886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-z6rv6,Uid:cf1efc62-d121-4b86-8964-6178a6812710,Namespace:kube-system,Attempt:0,} returns sandbox id \"f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341\"" Dec 16 12:26:56.436527 kubelet[2731]: E1216 12:26:56.436498 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:56.443044 containerd[1582]: time="2025-12-16T12:26:56.442690795Z" level=info msg="connecting to shim b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7" address="unix:///run/containerd/s/3b5e497783dae182753a01db679b45e7b61dd06a8b53c193bafc4d89185ebb7c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:56.445691 containerd[1582]: 
time="2025-12-16T12:26:56.445417095Z" level=info msg="CreateContainer within sandbox \"f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:26:56.467570 containerd[1582]: time="2025-12-16T12:26:56.467530083Z" level=info msg="Container e5e68b659c809031025120fea11e0e2b21f3af1ffebede678402ee0ba1b8f6b8: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:56.469242 systemd-networkd[1494]: calicc44519f35b: Link UP Dec 16 12:26:56.469425 systemd-networkd[1494]: calicc44519f35b: Gained carrier Dec 16 12:26:56.482830 systemd[1]: Started cri-containerd-b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7.scope - libcontainer container b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7. Dec 16 12:26:56.488973 containerd[1582]: time="2025-12-16T12:26:56.488923413Z" level=info msg="CreateContainer within sandbox \"f57f828bd48f1c0acc3aeb212fbc08399d36393ad8cc83532c0ef6c3ef59d341\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5e68b659c809031025120fea11e0e2b21f3af1ffebede678402ee0ba1b8f6b8\"" Dec 16 12:26:56.491141 containerd[1582]: time="2025-12-16T12:26:56.491058826Z" level=info msg="StartContainer for \"e5e68b659c809031025120fea11e0e2b21f3af1ffebede678402ee0ba1b8f6b8\"" Dec 16 12:26:56.492134 containerd[1582]: time="2025-12-16T12:26:56.492102590Z" level=info msg="connecting to shim e5e68b659c809031025120fea11e0e2b21f3af1ffebede678402ee0ba1b8f6b8" address="unix:///run/containerd/s/216bff56a7c0481b61ad7a816df2d5d4baa528263a5039fdf07012298529455f" protocol=ttrpc version=3 Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.113 [INFO][4448] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.130 [INFO][4448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86f4fcfc8d--xt2gx-eth0 calico-apiserver-86f4fcfc8d- calico-apiserver d237c895-0922-4d34-99eb-c1d5a8780e41 910 0 2025-12-16 12:26:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86f4fcfc8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86f4fcfc8d-xt2gx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicc44519f35b [] [] }} ContainerID="72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-xt2gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--xt2gx-" Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.131 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-xt2gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--xt2gx-eth0" Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.183 [INFO][4486] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" HandleID="k8s-pod-network.72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" Workload="localhost-k8s-calico--apiserver--86f4fcfc8d--xt2gx-eth0" Dec 16 12:26:56.492766 containerd[1582]: 
2025-12-16 12:26:56.183 [INFO][4486] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" HandleID="k8s-pod-network.72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" Workload="localhost-k8s-calico--apiserver--86f4fcfc8d--xt2gx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dc010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86f4fcfc8d-xt2gx", "timestamp":"2025-12-16 12:26:56.183423032 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.183 [INFO][4486] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.351 [INFO][4486] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.352 [INFO][4486] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.382 [INFO][4486] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" host="localhost" Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.396 [INFO][4486] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.414 [INFO][4486] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.421 [INFO][4486] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.429 [INFO][4486] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.429 [INFO][4486] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" host="localhost" Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.432 [INFO][4486] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2 Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.440 [INFO][4486] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" host="localhost" Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.459 [INFO][4486] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" host="localhost" Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.459 [INFO][4486] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" host="localhost" Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.460 [INFO][4486] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
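The three concurrent CNI ADDs above are serialized by the host-wide IPAM lock: handler [4470] acquires it at 12:26:56.164 and releases at .228, [4482] waits from .170 to .228 and releases at .351, and [4486] waits from .183 until .351 before assigning 192.168.88.133. The Go sketch below only models that serialization with an in-process sync.Mutex; the actual lock is shared across the separate CNI plugin processes, and the sleep is a made-up stand-in for the block reads and writes.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        var ipamLock sync.Mutex // in-process stand-in for the host-wide lock
        var wg sync.WaitGroup
        for _, h := range []string{"4470", "4482", "4486"} {
            wg.Add(1)
            go func(h string) {
                defer wg.Done()
                fmt.Println(h, "about to acquire host-wide IPAM lock")
                ipamLock.Lock()
                fmt.Println(h, "acquired host-wide IPAM lock")
                time.Sleep(50 * time.Millisecond) // stands in for block reads/writes
                ipamLock.Unlock()
                fmt.Println(h, "released host-wide IPAM lock")
            }(h)
        }
        wg.Wait()
    }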
Dec 16 12:26:56.492766 containerd[1582]: 2025-12-16 12:26:56.460 [INFO][4486] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" HandleID="k8s-pod-network.72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" Workload="localhost-k8s-calico--apiserver--86f4fcfc8d--xt2gx-eth0" Dec 16 12:26:56.493944 containerd[1582]: 2025-12-16 12:26:56.464 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-xt2gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--xt2gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f4fcfc8d--xt2gx-eth0", GenerateName:"calico-apiserver-86f4fcfc8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d237c895-0922-4d34-99eb-c1d5a8780e41", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f4fcfc8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86f4fcfc8d-xt2gx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc44519f35b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:56.493944 containerd[1582]: 2025-12-16 12:26:56.464 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-xt2gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--xt2gx-eth0" Dec 16 12:26:56.493944 containerd[1582]: 2025-12-16 12:26:56.464 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc44519f35b ContainerID="72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-xt2gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--xt2gx-eth0" Dec 16 12:26:56.493944 containerd[1582]: 2025-12-16 12:26:56.468 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-xt2gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--xt2gx-eth0" Dec 16 12:26:56.493944 containerd[1582]: 2025-12-16 12:26:56.468 [INFO][4448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-xt2gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--xt2gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f4fcfc8d--xt2gx-eth0", GenerateName:"calico-apiserver-86f4fcfc8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d237c895-0922-4d34-99eb-c1d5a8780e41", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f4fcfc8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2", Pod:"calico-apiserver-86f4fcfc8d-xt2gx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc44519f35b", MAC:"02:d4:cd:99:70:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:56.493944 containerd[1582]: 2025-12-16 12:26:56.486 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-xt2gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--xt2gx-eth0" Dec 16 12:26:56.501000 audit: BPF prog-id=193 op=LOAD Dec 16 12:26:56.502000 audit: BPF prog-id=194 op=LOAD Dec 16 12:26:56.502000 audit[4599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4587 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233346433313239366265323632303436316535303131646662666437 Dec 16 12:26:56.502000 audit: BPF prog-id=194 op=UNLOAD Dec 16 12:26:56.502000 audit[4599]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.502000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233346433313239366265323632303436316535303131646662666437 Dec 16 12:26:56.502000 audit: BPF prog-id=195 op=LOAD Dec 16 12:26:56.502000 audit[4599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4587 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233346433313239366265323632303436316535303131646662666437 Dec 16 12:26:56.503000 audit: BPF prog-id=196 op=LOAD Dec 16 12:26:56.503000 audit[4599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4587 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233346433313239366265323632303436316535303131646662666437 Dec 16 12:26:56.503000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:26:56.503000 audit[4599]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233346433313239366265323632303436316535303131646662666437 Dec 16 12:26:56.503000 audit: BPF prog-id=195 op=UNLOAD Dec 16 12:26:56.503000 audit[4599]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233346433313239366265323632303436316535303131646662666437 Dec 16 12:26:56.503000 audit: BPF prog-id=197 op=LOAD Dec 16 12:26:56.503000 audit[4599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4587 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.503000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233346433313239366265323632303436316535303131646662666437 Dec 16 12:26:56.506176 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:26:56.520051 systemd[1]: Started cri-containerd-e5e68b659c809031025120fea11e0e2b21f3af1ffebede678402ee0ba1b8f6b8.scope - libcontainer container e5e68b659c809031025120fea11e0e2b21f3af1ffebede678402ee0ba1b8f6b8. Dec 16 12:26:56.533087 containerd[1582]: time="2025-12-16T12:26:56.533030779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-686cd7448d-zpgvc,Uid:cb75681b-286c-4a6b-a7e3-6df9c7f59d30,Namespace:calico-system,Attempt:0,} returns sandbox id \"b34d31296be2620461e5011dfbfd7b3de3e9fde3a6c3829ab364807c7dbe60f7\"" Dec 16 12:26:56.537455 containerd[1582]: time="2025-12-16T12:26:56.537155473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:26:56.538000 audit: BPF prog-id=198 op=LOAD Dec 16 12:26:56.539000 audit: BPF prog-id=199 op=LOAD Dec 16 12:26:56.539000 audit[4627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4536 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535653638623635396338303930333130323531323066656131316530 Dec 16 12:26:56.540000 audit: BPF prog-id=199 op=UNLOAD Dec 16 12:26:56.540000 audit[4627]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4536 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535653638623635396338303930333130323531323066656131316530 Dec 16 12:26:56.540000 audit: BPF prog-id=200 op=LOAD Dec 16 12:26:56.540000 audit[4627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4536 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535653638623635396338303930333130323531323066656131316530 Dec 16 12:26:56.540000 audit: BPF prog-id=201 op=LOAD Dec 16 12:26:56.540000 audit[4627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4536 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535653638623635396338303930333130323531323066656131316530 Dec 16 12:26:56.540000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:26:56.540000 audit[4627]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4536 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535653638623635396338303930333130323531323066656131316530 Dec 16 12:26:56.540000 audit: BPF prog-id=200 op=UNLOAD Dec 16 12:26:56.540000 audit[4627]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4536 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535653638623635396338303930333130323531323066656131316530 Dec 16 12:26:56.540000 audit: BPF prog-id=202 op=LOAD Dec 16 12:26:56.540000 audit[4627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4536 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535653638623635396338303930333130323531323066656131316530 Dec 16 12:26:56.545274 containerd[1582]: time="2025-12-16T12:26:56.545226645Z" level=info msg="connecting to shim 72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2" address="unix:///run/containerd/s/0f41c59c734488aa50990f607bf5a213f47ab0877fa4b1dddd0fa11873561dcf" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:56.562696 containerd[1582]: time="2025-12-16T12:26:56.562648694Z" level=info msg="StartContainer for \"e5e68b659c809031025120fea11e0e2b21f3af1ffebede678402ee0ba1b8f6b8\" returns successfully" Dec 16 12:26:56.576636 systemd[1]: Started cri-containerd-72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2.scope - libcontainer container 72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2. 
Dec 16 12:26:56.591000 audit: BPF prog-id=203 op=LOAD Dec 16 12:26:56.592000 audit: BPF prog-id=204 op=LOAD Dec 16 12:26:56.592000 audit[4673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4660 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643531356165353839383435313131316332313061343765653832 Dec 16 12:26:56.592000 audit: BPF prog-id=204 op=UNLOAD Dec 16 12:26:56.592000 audit[4673]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4660 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643531356165353839383435313131316332313061343765653832 Dec 16 12:26:56.592000 audit: BPF prog-id=205 op=LOAD Dec 16 12:26:56.592000 audit[4673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4660 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643531356165353839383435313131316332313061343765653832 Dec 16 12:26:56.592000 audit: BPF prog-id=206 op=LOAD Dec 16 12:26:56.592000 audit[4673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4660 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643531356165353839383435313131316332313061343765653832 Dec 16 12:26:56.592000 audit: BPF prog-id=206 op=UNLOAD Dec 16 12:26:56.592000 audit[4673]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4660 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643531356165353839383435313131316332313061343765653832 Dec 16 12:26:56.592000 audit: BPF prog-id=205 op=UNLOAD Dec 16 12:26:56.592000 audit[4673]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4660 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643531356165353839383435313131316332313061343765653832 Dec 16 12:26:56.592000 audit: BPF prog-id=207 op=LOAD Dec 16 12:26:56.592000 audit[4673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4660 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:56.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643531356165353839383435313131316332313061343765653832 Dec 16 12:26:56.595212 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:26:56.627922 containerd[1582]: time="2025-12-16T12:26:56.627835885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fcfc8d-xt2gx,Uid:d237c895-0922-4d34-99eb-c1d5a8780e41,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"72d515ae5898451111c210a47ee82f30c4ae82a6ff983c85bcd26b1610f614b2\"" Dec 16 12:26:56.769341 containerd[1582]: time="2025-12-16T12:26:56.769261279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:56.848975 containerd[1582]: time="2025-12-16T12:26:56.848920880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:26:56.849151 containerd[1582]: time="2025-12-16T12:26:56.848962044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:56.849239 kubelet[2731]: E1216 12:26:56.849199 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:26:56.850276 kubelet[2731]: E1216 12:26:56.849251 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:26:56.850276 kubelet[2731]: E1216 12:26:56.849422 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-686cd7448d-zpgvc_calico-system(cb75681b-286c-4a6b-a7e3-6df9c7f59d30): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:56.850276 kubelet[2731]: E1216 12:26:56.849463 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-686cd7448d-zpgvc" podUID="cb75681b-286c-4a6b-a7e3-6df9c7f59d30" Dec 16 12:26:56.850406 containerd[1582]: time="2025-12-16T12:26:56.849702303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:26:57.079779 containerd[1582]: time="2025-12-16T12:26:57.079718094Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:57.081900 containerd[1582]: time="2025-12-16T12:26:57.081777457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:26:57.081900 containerd[1582]: time="2025-12-16T12:26:57.081841382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:57.082162 kubelet[2731]: E1216 12:26:57.082128 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:26:57.082255 kubelet[2731]: E1216 12:26:57.082240 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:26:57.082410 kubelet[2731]: E1216 12:26:57.082390 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-86f4fcfc8d-xt2gx_calico-apiserver(d237c895-0922-4d34-99eb-c1d5a8780e41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:57.082521 kubelet[2731]: E1216 12:26:57.082501 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-xt2gx" podUID="d237c895-0922-4d34-99eb-c1d5a8780e41" Dec 16 12:26:57.118000 audit: BPF prog-id=208 op=LOAD Dec 16 12:26:57.118000 audit[4730]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc2a8aea8 a2=98 a3=ffffc2a8ae98 items=0 ppid=4712 pid=4730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.118000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:26:57.118000 audit: BPF prog-id=208 op=UNLOAD Dec 16 12:26:57.118000 audit[4730]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc2a8ae78 a3=0 items=0 ppid=4712 pid=4730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.118000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:26:57.118000 audit: BPF prog-id=209 op=LOAD Dec 16 12:26:57.118000 audit[4730]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc2a8ad58 a2=74 a3=95 items=0 ppid=4712 pid=4730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.118000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:26:57.118000 audit: BPF prog-id=209 op=UNLOAD Dec 16 12:26:57.118000 audit[4730]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4712 pid=4730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.118000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:26:57.118000 audit: BPF prog-id=210 op=LOAD Dec 16 12:26:57.118000 audit[4730]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc2a8ad88 a2=40 a3=ffffc2a8adb8 items=0 ppid=4712 pid=4730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.118000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:26:57.118000 audit: BPF prog-id=210 op=UNLOAD Dec 16 12:26:57.118000 audit[4730]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc2a8adb8 items=0 ppid=4712 pid=4730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:26:57.118000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:26:57.120000 audit: BPF prog-id=211 op=LOAD Dec 16 12:26:57.120000 audit[4731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd5eff718 a2=98 a3=ffffd5eff708 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.120000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.120000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:26:57.120000 audit[4731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd5eff6e8 a3=0 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.120000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.120000 audit: BPF prog-id=212 op=LOAD Dec 16 12:26:57.120000 audit[4731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd5eff3a8 a2=74 a3=95 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.120000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.120000 audit: BPF prog-id=212 op=UNLOAD Dec 16 12:26:57.120000 audit[4731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.120000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.120000 audit: BPF prog-id=213 op=LOAD Dec 16 12:26:57.120000 audit[4731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd5eff408 a2=94 a3=2 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.120000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.120000 audit: BPF prog-id=213 op=UNLOAD Dec 16 12:26:57.120000 audit[4731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.120000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.218000 audit: BPF prog-id=214 op=LOAD Dec 16 12:26:57.218000 audit[4731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd5eff3c8 a2=40 a3=ffffd5eff3f8 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.218000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.218000 audit: BPF prog-id=214 op=UNLOAD Dec 16 12:26:57.218000 audit[4731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd5eff3f8 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.218000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.228000 audit: BPF prog-id=215 op=LOAD Dec 16 12:26:57.228000 audit[4731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd5eff3d8 a2=94 a3=4 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.228000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.228000 audit: BPF prog-id=215 op=UNLOAD Dec 16 12:26:57.228000 audit[4731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.228000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.228000 audit: BPF prog-id=216 op=LOAD Dec 16 12:26:57.228000 audit[4731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd5eff218 a2=94 a3=5 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.228000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.228000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:26:57.228000 audit[4731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.228000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.228000 audit: BPF prog-id=217 op=LOAD Dec 16 12:26:57.228000 audit[4731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd5eff448 a2=94 a3=6 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.228000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.229000 audit: BPF prog-id=217 op=UNLOAD Dec 16 12:26:57.229000 audit[4731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.229000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.229000 audit: BPF prog-id=218 op=LOAD 
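[annotation] The long audit runs here are Calico's BPF dataplane driving bpftool: on arm64 (arch=c00000b7), syscall 280 is bpf(2) and syscall 57 is close, so each BPF prog-id LOAD/UNLOAD pair is a program (largely small feature-probe programs, plus the maps and programs actually being set up) loaded via bpf(2) and released when its fd is closed. The PROCTITLE field carries the invoked command line, hex-encoded with NUL-separated arguments. The small Go helper below is not part of any tool in this log; it just decodes one of the proctitle values recorded above so the command is readable.

    // decodeProctitle turns an audit PROCTITLE hex string back into the
    // command line it encodes (arguments are separated by NUL bytes).
    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func decodeProctitle(h string) (string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return "", err
        }
        return strings.Join(strings.Split(string(raw), "\x00"), " "), nil
    }

    func main() {
        // Value copied verbatim from one of the audit records above.
        const p = "627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030"
        cmd, err := decodeProctitle(p)
        if err != nil {
            panic(err)
        }
        fmt.Println(cmd)
    }

It prints: bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs type prog_array key 4 value 4 entries 3 name cali_ctlb_progs flags 0, i.e. Calico pinning what is presumably its connect-time load-balancer program array. Applying the same decoding to the later records yields bpftool map list --json, bpftool map create /sys/fs/bpf/calico/calico_failsafe_ports_v1 type hash ..., bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp, and bpftool --json --pretty prog show pinned ... for the XDP prefilter.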
Dec 16 12:26:57.229000 audit[4731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd5efec18 a2=94 a3=83 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.229000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.229000 audit: BPF prog-id=219 op=LOAD Dec 16 12:26:57.229000 audit[4731]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd5efe9d8 a2=94 a3=2 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.229000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.229000 audit: BPF prog-id=219 op=UNLOAD Dec 16 12:26:57.229000 audit[4731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.229000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.230000 audit: BPF prog-id=218 op=UNLOAD Dec 16 12:26:57.230000 audit[4731]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=32a02620 a3=329f5b00 items=0 ppid=4712 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.230000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:26:57.242000 audit: BPF prog-id=220 op=LOAD Dec 16 12:26:57.242000 audit[4734]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdceb27e8 a2=98 a3=ffffdceb27d8 items=0 ppid=4712 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.242000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:26:57.242000 audit: BPF prog-id=220 op=UNLOAD Dec 16 12:26:57.242000 audit[4734]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdceb27b8 a3=0 items=0 ppid=4712 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.242000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:26:57.242000 audit: BPF prog-id=221 op=LOAD Dec 16 12:26:57.242000 audit[4734]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdceb2698 a2=74 a3=95 items=0 ppid=4712 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.242000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:26:57.242000 audit: BPF prog-id=221 op=UNLOAD Dec 16 12:26:57.242000 audit[4734]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4712 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.242000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:26:57.243000 audit: BPF prog-id=222 op=LOAD Dec 16 12:26:57.243000 audit[4734]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdceb26c8 a2=40 a3=ffffdceb26f8 items=0 ppid=4712 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.243000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:26:57.243000 audit: BPF prog-id=222 op=UNLOAD Dec 16 12:26:57.243000 audit[4734]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffdceb26f8 items=0 ppid=4712 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.243000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:26:57.245224 kubelet[2731]: E1216 12:26:57.245168 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:57.250344 kubelet[2731]: E1216 12:26:57.249851 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-xt2gx" podUID="d237c895-0922-4d34-99eb-c1d5a8780e41" Dec 16 12:26:57.251670 kubelet[2731]: E1216 12:26:57.251568 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Dec 16 12:26:57.252744 kubelet[2731]: E1216 12:26:57.252653 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cbqqb" podUID="3e707694-c3c3-46cb-9c34-819568db9981" Dec 16 12:26:57.252951 kubelet[2731]: E1216 12:26:57.252834 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-686cd7448d-zpgvc" podUID="cb75681b-286c-4a6b-a7e3-6df9c7f59d30" Dec 16 12:26:57.285429 systemd-networkd[1494]: cali55d08cec961: Gained IPv6LL Dec 16 12:26:57.306000 audit[4755]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=4755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:57.306000 audit[4755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd1d785e0 a2=0 a3=1 items=0 ppid=2851 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.306000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:57.315000 audit[4755]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=4755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:57.315000 audit[4755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd1d785e0 a2=0 a3=1 items=0 ppid=2851 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.315000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:57.319547 kubelet[2731]: I1216 12:26:57.318644 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-z6rv6" podStartSLOduration=41.318617863 podStartE2EDuration="41.318617863s" podCreationTimestamp="2025-12-16 12:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:26:57.299087205 +0000 UTC m=+48.388072078" watchObservedRunningTime="2025-12-16 12:26:57.318617863 +0000 UTC m=+48.407602776" Dec 16 12:26:57.368179 systemd-networkd[1494]: vxlan.calico: Link UP Dec 16 12:26:57.368188 systemd-networkd[1494]: vxlan.calico: Gained carrier Dec 16 12:26:57.402000 audit: BPF prog-id=223 op=LOAD Dec 16 12:26:57.402000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff90b8998 a2=98 a3=fffff90b8988 items=0 ppid=4712 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.402000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:26:57.402000 audit: BPF prog-id=223 op=UNLOAD Dec 16 12:26:57.402000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff90b8968 a3=0 items=0 ppid=4712 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.402000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:26:57.403000 audit: BPF prog-id=224 op=LOAD Dec 16 12:26:57.403000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff90b8678 a2=74 a3=95 items=0 ppid=4712 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.403000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:26:57.403000 audit: BPF prog-id=224 op=UNLOAD Dec 16 12:26:57.403000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4712 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.403000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:26:57.403000 audit: BPF prog-id=225 op=LOAD Dec 16 12:26:57.403000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff90b86d8 a2=94 a3=2 items=0 ppid=4712 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.403000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:26:57.403000 audit: BPF prog-id=225 op=UNLOAD Dec 16 12:26:57.403000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4712 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.403000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:26:57.403000 audit: BPF prog-id=226 op=LOAD Dec 16 12:26:57.403000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff90b8558 a2=40 a3=fffff90b8588 items=0 ppid=4712 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.403000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:26:57.403000 audit: BPF prog-id=226 op=UNLOAD Dec 16 12:26:57.403000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffff90b8588 items=0 ppid=4712 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.403000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:26:57.403000 audit: BPF prog-id=227 op=LOAD Dec 16 12:26:57.403000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff90b86a8 a2=94 a3=b7 items=0 ppid=4712 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.403000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:26:57.403000 audit: BPF prog-id=227 op=UNLOAD Dec 16 12:26:57.403000 audit[4784]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4712 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.403000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:26:57.403000 audit: BPF prog-id=228 op=LOAD Dec 16 12:26:57.403000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff90b7d58 a2=94 a3=2 items=0 ppid=4712 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.403000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:26:57.403000 audit: BPF prog-id=228 op=UNLOAD Dec 16 12:26:57.403000 audit[4784]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4712 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.403000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:26:57.403000 audit: BPF prog-id=229 op=LOAD Dec 16 12:26:57.403000 audit[4784]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff90b7ee8 a2=94 a3=30 items=0 ppid=4712 pid=4784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.403000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:26:57.408000 audit: BPF prog-id=230 op=LOAD Dec 16 12:26:57.408000 audit[4787]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffef86cd88 a2=98 a3=ffffef86cd78 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.408000 audit: BPF prog-id=230 op=UNLOAD Dec 16 12:26:57.408000 audit[4787]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffef86cd58 a3=0 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.408000 audit: BPF prog-id=231 op=LOAD Dec 16 12:26:57.408000 audit[4787]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffef86ca18 a2=74 a3=95 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.408000 audit: BPF prog-id=231 op=UNLOAD Dec 16 12:26:57.408000 audit[4787]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.408000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.408000 audit: BPF prog-id=232 op=LOAD Dec 16 12:26:57.408000 audit[4787]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffef86ca78 a2=94 a3=2 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.408000 audit: BPF prog-id=232 op=UNLOAD Dec 16 12:26:57.408000 audit[4787]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.535000 audit: BPF prog-id=233 op=LOAD Dec 16 12:26:57.535000 audit[4787]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffef86ca38 a2=40 a3=ffffef86ca68 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.535000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.536000 audit: BPF prog-id=233 op=UNLOAD Dec 16 12:26:57.536000 audit[4787]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffef86ca68 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.536000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.546000 audit: BPF prog-id=234 op=LOAD Dec 16 12:26:57.546000 audit[4787]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffef86ca48 a2=94 a3=4 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.546000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.546000 audit: BPF prog-id=234 op=UNLOAD Dec 16 12:26:57.546000 audit[4787]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.546000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.546000 audit: BPF prog-id=235 op=LOAD Dec 16 12:26:57.546000 audit[4787]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffef86c888 a2=94 a3=5 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.546000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.547000 audit: BPF prog-id=235 op=UNLOAD Dec 16 12:26:57.547000 audit[4787]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.547000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.547000 audit: BPF prog-id=236 op=LOAD Dec 16 12:26:57.547000 audit[4787]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffef86cab8 a2=94 a3=6 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.547000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.547000 audit: BPF prog-id=236 op=UNLOAD Dec 16 12:26:57.547000 audit[4787]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.547000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.548000 audit: BPF prog-id=237 op=LOAD Dec 16 12:26:57.548000 audit[4787]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffef86c288 a2=94 a3=83 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.548000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.548000 audit: BPF prog-id=238 op=LOAD Dec 16 12:26:57.548000 audit[4787]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffef86c048 a2=94 
a3=2 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.548000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.548000 audit: BPF prog-id=238 op=UNLOAD Dec 16 12:26:57.548000 audit[4787]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.548000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.548000 audit: BPF prog-id=237 op=UNLOAD Dec 16 12:26:57.548000 audit[4787]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=84ed620 a3=84e0b00 items=0 ppid=4712 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.548000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:26:57.561000 audit: BPF prog-id=229 op=UNLOAD Dec 16 12:26:57.561000 audit[4712]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000e10ac0 a2=0 a3=0 items=0 ppid=3997 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.561000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 12:26:57.626000 audit[4841]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:26:57.626000 audit[4841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffd1281390 a2=0 a3=ffff93a1cfa8 items=0 ppid=4712 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.626000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:26:57.629000 audit[4843]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=4843 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:26:57.629000 audit[4843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc2b65dc0 a2=0 a3=ffffb704ffa8 items=0 ppid=4712 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.629000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:26:57.630000 audit[4845]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=4845 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:26:57.630000 audit[4845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffff352a990 a2=0 a3=ffffa872cfa8 items=0 ppid=4712 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.630000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:26:57.636000 audit[4842]: NETFILTER_CFG table=filter:126 family=2 entries=234 op=nft_register_chain pid=4842 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:26:57.636000 audit[4842]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=137032 a0=3 a1=ffffddd3b570 a2=0 a3=ffffa2bf8fa8 items=0 ppid=4712 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:57.636000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:26:58.031405 containerd[1582]: time="2025-12-16T12:26:58.031190621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7bcbfb5-5mqzg,Uid:e774688f-af9f-49ba-93f6-0e9b13337ee0,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:26:58.052895 systemd-networkd[1494]: cali3f96b60eb6f: Gained IPv6LL Dec 16 12:26:58.117464 systemd-networkd[1494]: calicc44519f35b: Gained IPv6LL Dec 16 12:26:58.160316 systemd-networkd[1494]: califf317edbed5: Link UP Dec 16 12:26:58.160917 systemd-networkd[1494]: califf317edbed5: Gained carrier Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.076 [INFO][4853] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b7bcbfb5--5mqzg-eth0 calico-apiserver-6b7bcbfb5- calico-apiserver e774688f-af9f-49ba-93f6-0e9b13337ee0 916 0 2025-12-16 12:26:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b7bcbfb5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b7bcbfb5-5mqzg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califf317edbed5 [] [] }} ContainerID="d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" Namespace="calico-apiserver" Pod="calico-apiserver-6b7bcbfb5-5mqzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7bcbfb5--5mqzg-" Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.077 [INFO][4853] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" Namespace="calico-apiserver" Pod="calico-apiserver-6b7bcbfb5-5mqzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7bcbfb5--5mqzg-eth0" Dec 16 
12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.113 [INFO][4868] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" HandleID="k8s-pod-network.d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" Workload="localhost-k8s-calico--apiserver--6b7bcbfb5--5mqzg-eth0" Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.113 [INFO][4868] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" HandleID="k8s-pod-network.d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" Workload="localhost-k8s-calico--apiserver--6b7bcbfb5--5mqzg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003226c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b7bcbfb5-5mqzg", "timestamp":"2025-12-16 12:26:58.113765392 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.114 [INFO][4868] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.114 [INFO][4868] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.114 [INFO][4868] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.123 [INFO][4868] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" host="localhost" Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.129 [INFO][4868] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.134 [INFO][4868] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.136 [INFO][4868] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.139 [INFO][4868] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.139 [INFO][4868] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" host="localhost" Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.141 [INFO][4868] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.146 [INFO][4868] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" host="localhost" Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.155 [INFO][4868] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" host="localhost" Dec 16 
12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.155 [INFO][4868] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" host="localhost" Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.155 [INFO][4868] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:26:58.175785 containerd[1582]: 2025-12-16 12:26:58.155 [INFO][4868] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" HandleID="k8s-pod-network.d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" Workload="localhost-k8s-calico--apiserver--6b7bcbfb5--5mqzg-eth0" Dec 16 12:26:58.176880 containerd[1582]: 2025-12-16 12:26:58.158 [INFO][4853] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" Namespace="calico-apiserver" Pod="calico-apiserver-6b7bcbfb5-5mqzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7bcbfb5--5mqzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b7bcbfb5--5mqzg-eth0", GenerateName:"calico-apiserver-6b7bcbfb5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e774688f-af9f-49ba-93f6-0e9b13337ee0", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b7bcbfb5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b7bcbfb5-5mqzg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf317edbed5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:58.176880 containerd[1582]: 2025-12-16 12:26:58.158 [INFO][4853] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" Namespace="calico-apiserver" Pod="calico-apiserver-6b7bcbfb5-5mqzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7bcbfb5--5mqzg-eth0" Dec 16 12:26:58.176880 containerd[1582]: 2025-12-16 12:26:58.158 [INFO][4853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf317edbed5 ContainerID="d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" Namespace="calico-apiserver" Pod="calico-apiserver-6b7bcbfb5-5mqzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7bcbfb5--5mqzg-eth0" Dec 16 12:26:58.176880 containerd[1582]: 2025-12-16 12:26:58.161 [INFO][4853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" Namespace="calico-apiserver" Pod="calico-apiserver-6b7bcbfb5-5mqzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7bcbfb5--5mqzg-eth0" Dec 16 12:26:58.176880 containerd[1582]: 2025-12-16 12:26:58.162 [INFO][4853] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" Namespace="calico-apiserver" Pod="calico-apiserver-6b7bcbfb5-5mqzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7bcbfb5--5mqzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b7bcbfb5--5mqzg-eth0", GenerateName:"calico-apiserver-6b7bcbfb5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e774688f-af9f-49ba-93f6-0e9b13337ee0", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b7bcbfb5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd", Pod:"calico-apiserver-6b7bcbfb5-5mqzg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf317edbed5", MAC:"ee:45:1b:38:6c:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:58.176880 containerd[1582]: 2025-12-16 12:26:58.171 [INFO][4853] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" Namespace="calico-apiserver" Pod="calico-apiserver-6b7bcbfb5-5mqzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7bcbfb5--5mqzg-eth0" Dec 16 12:26:58.180652 systemd-networkd[1494]: calia111e775334: Gained IPv6LL Dec 16 12:26:58.193000 audit[4884]: NETFILTER_CFG table=filter:127 family=2 entries=53 op=nft_register_chain pid=4884 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:26:58.193000 audit[4884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26640 a0=3 a1=ffffcbd97090 a2=0 a3=ffff93286fa8 items=0 ppid=4712 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:58.193000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:26:58.237663 containerd[1582]: time="2025-12-16T12:26:58.237607287Z" level=info msg="connecting to shim d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd" 
address="unix:///run/containerd/s/f246c785e1f4e813605c9eb05c7e7b52ed0b585154bc664bb36f6ff577f94eda" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:58.254290 kubelet[2731]: E1216 12:26:58.254224 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:58.260037 kubelet[2731]: E1216 12:26:58.259876 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-xt2gx" podUID="d237c895-0922-4d34-99eb-c1d5a8780e41" Dec 16 12:26:58.260037 kubelet[2731]: E1216 12:26:58.259904 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-686cd7448d-zpgvc" podUID="cb75681b-286c-4a6b-a7e3-6df9c7f59d30" Dec 16 12:26:58.263501 systemd[1]: Started cri-containerd-d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd.scope - libcontainer container d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd. 
Dec 16 12:26:58.292000 audit: BPF prog-id=239 op=LOAD Dec 16 12:26:58.294000 audit: BPF prog-id=240 op=LOAD Dec 16 12:26:58.294000 audit[4903]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4893 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:58.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353130303665616632363238313832363062623438303465613061 Dec 16 12:26:58.294000 audit: BPF prog-id=240 op=UNLOAD Dec 16 12:26:58.294000 audit[4903]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4893 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:58.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353130303665616632363238313832363062623438303465613061 Dec 16 12:26:58.294000 audit: BPF prog-id=241 op=LOAD Dec 16 12:26:58.294000 audit[4903]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4893 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:58.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353130303665616632363238313832363062623438303465613061 Dec 16 12:26:58.294000 audit: BPF prog-id=242 op=LOAD Dec 16 12:26:58.294000 audit[4903]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4893 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:58.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353130303665616632363238313832363062623438303465613061 Dec 16 12:26:58.294000 audit: BPF prog-id=242 op=UNLOAD Dec 16 12:26:58.294000 audit[4903]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4893 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:58.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353130303665616632363238313832363062623438303465613061 Dec 16 12:26:58.294000 audit: BPF prog-id=241 op=UNLOAD Dec 16 12:26:58.294000 audit[4903]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4893 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:58.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353130303665616632363238313832363062623438303465613061 Dec 16 12:26:58.294000 audit: BPF prog-id=243 op=LOAD Dec 16 12:26:58.294000 audit[4903]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4893 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:58.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433353130303665616632363238313832363062623438303465613061 Dec 16 12:26:58.296743 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:26:58.329081 containerd[1582]: time="2025-12-16T12:26:58.329040217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7bcbfb5-5mqzg,Uid:e774688f-af9f-49ba-93f6-0e9b13337ee0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d351006eaf262818260bb4804ea0adb6b6870bb57b032dc6f6f690b2f6e2b1bd\"" Dec 16 12:26:58.333237 containerd[1582]: time="2025-12-16T12:26:58.333198296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:26:58.339000 audit[4931]: NETFILTER_CFG table=filter:128 family=2 entries=17 op=nft_register_rule pid=4931 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:58.339000 audit[4931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff7101840 a2=0 a3=1 items=0 ppid=2851 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:58.339000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:58.353000 audit[4931]: NETFILTER_CFG table=nat:129 family=2 entries=35 op=nft_register_chain pid=4931 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:58.353000 audit[4931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff7101840 a2=0 a3=1 items=0 ppid=2851 pid=4931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:58.353000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:58.546137 containerd[1582]: time="2025-12-16T12:26:58.545878122Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:58.547855 containerd[1582]: time="2025-12-16T12:26:58.547692701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:26:58.547972 containerd[1582]: time="2025-12-16T12:26:58.547722223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:58.548266 kubelet[2731]: E1216 12:26:58.548220 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:26:58.548359 kubelet[2731]: E1216 12:26:58.548277 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:26:58.548415 kubelet[2731]: E1216 12:26:58.548387 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b7bcbfb5-5mqzg_calico-apiserver(e774688f-af9f-49ba-93f6-0e9b13337ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:58.548747 kubelet[2731]: E1216 12:26:58.548441 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7bcbfb5-5mqzg" podUID="e774688f-af9f-49ba-93f6-0e9b13337ee0" Dec 16 12:26:59.034026 containerd[1582]: time="2025-12-16T12:26:59.033981840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fcfc8d-7nnp6,Uid:c0a4e673-b12d-498c-8d03-d2574bb6b967,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:26:59.035409 kubelet[2731]: E1216 12:26:59.035371 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:59.035828 containerd[1582]: time="2025-12-16T12:26:59.035765173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6zw4p,Uid:b9fe45d7-db81-4803-93c7-fa2d49931f66,Namespace:kube-system,Attempt:0,}" Dec 16 12:26:59.077091 systemd-networkd[1494]: vxlan.calico: Gained IPv6LL Dec 16 12:26:59.179659 systemd-networkd[1494]: cali1b4e7d00924: Link UP Dec 16 12:26:59.179808 systemd-networkd[1494]: cali1b4e7d00924: Gained carrier Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.096 [INFO][4934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86f4fcfc8d--7nnp6-eth0 calico-apiserver-86f4fcfc8d- calico-apiserver c0a4e673-b12d-498c-8d03-d2574bb6b967 909 0 2025-12-16 12:26:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86f4fcfc8d projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86f4fcfc8d-7nnp6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1b4e7d00924 [] [] }} ContainerID="10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-7nnp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--7nnp6-" Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.097 [INFO][4934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-7nnp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--7nnp6-eth0" Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.133 [INFO][4964] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" HandleID="k8s-pod-network.10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" Workload="localhost-k8s-calico--apiserver--86f4fcfc8d--7nnp6-eth0" Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.134 [INFO][4964] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" HandleID="k8s-pod-network.10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" Workload="localhost-k8s-calico--apiserver--86f4fcfc8d--7nnp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d3f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86f4fcfc8d-7nnp6", "timestamp":"2025-12-16 12:26:59.133828456 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.134 [INFO][4964] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.134 [INFO][4964] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.134 [INFO][4964] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.144 [INFO][4964] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" host="localhost" Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.150 [INFO][4964] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.155 [INFO][4964] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.157 [INFO][4964] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.160 [INFO][4964] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.160 [INFO][4964] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" host="localhost" Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.162 [INFO][4964] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163 Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.166 [INFO][4964] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" host="localhost" Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.172 [INFO][4964] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" host="localhost" Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.172 [INFO][4964] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" host="localhost" Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.172 [INFO][4964] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:26:59.202555 containerd[1582]: 2025-12-16 12:26:59.173 [INFO][4964] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" HandleID="k8s-pod-network.10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" Workload="localhost-k8s-calico--apiserver--86f4fcfc8d--7nnp6-eth0" Dec 16 12:26:59.203838 containerd[1582]: 2025-12-16 12:26:59.177 [INFO][4934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-7nnp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--7nnp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f4fcfc8d--7nnp6-eth0", GenerateName:"calico-apiserver-86f4fcfc8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c0a4e673-b12d-498c-8d03-d2574bb6b967", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f4fcfc8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86f4fcfc8d-7nnp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b4e7d00924", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:59.203838 containerd[1582]: 2025-12-16 12:26:59.177 [INFO][4934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-7nnp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--7nnp6-eth0" Dec 16 12:26:59.203838 containerd[1582]: 2025-12-16 12:26:59.177 [INFO][4934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b4e7d00924 ContainerID="10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-7nnp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--7nnp6-eth0" Dec 16 12:26:59.203838 containerd[1582]: 2025-12-16 12:26:59.180 [INFO][4934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-7nnp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--7nnp6-eth0" Dec 16 12:26:59.203838 containerd[1582]: 2025-12-16 12:26:59.181 [INFO][4934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-7nnp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--7nnp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f4fcfc8d--7nnp6-eth0", GenerateName:"calico-apiserver-86f4fcfc8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c0a4e673-b12d-498c-8d03-d2574bb6b967", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f4fcfc8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163", Pod:"calico-apiserver-86f4fcfc8d-7nnp6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b4e7d00924", MAC:"12:f1:39:dc:eb:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:59.203838 containerd[1582]: 2025-12-16 12:26:59.196 [INFO][4934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" Namespace="calico-apiserver" Pod="calico-apiserver-86f4fcfc8d-7nnp6" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f4fcfc8d--7nnp6-eth0" Dec 16 12:26:59.220000 audit[4989]: NETFILTER_CFG table=filter:130 family=2 entries=57 op=nft_register_chain pid=4989 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:26:59.220000 audit[4989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27828 a0=3 a1=ffffd2931210 a2=0 a3=ffffb6f9cfa8 items=0 ppid=4712 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.220000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:26:59.236201 containerd[1582]: time="2025-12-16T12:26:59.236135176Z" level=info msg="connecting to shim 10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163" address="unix:///run/containerd/s/4f2265e183d1037ba94af45d32b462a34638c9b2ef469f5a5875d0f3352b1038" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:59.260272 kubelet[2731]: E1216 12:26:59.260236 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7bcbfb5-5mqzg" podUID="e774688f-af9f-49ba-93f6-0e9b13337ee0" Dec 16 12:26:59.262136 kubelet[2731]: E1216 12:26:59.261648 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:59.268593 systemd[1]: Started cri-containerd-10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163.scope - libcontainer container 10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163. Dec 16 12:26:59.283000 audit: BPF prog-id=244 op=LOAD Dec 16 12:26:59.286000 audit: BPF prog-id=245 op=LOAD Dec 16 12:26:59.286000 audit[5010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4999 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616265343639336234623933363263643836656130316634653837 Dec 16 12:26:59.286000 audit: BPF prog-id=245 op=UNLOAD Dec 16 12:26:59.286000 audit[5010]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4999 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616265343639336234623933363263643836656130316634653837 Dec 16 12:26:59.286000 audit: BPF prog-id=246 op=LOAD Dec 16 12:26:59.286000 audit[5010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4999 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616265343639336234623933363263643836656130316634653837 Dec 16 12:26:59.286000 audit: BPF prog-id=247 op=LOAD Dec 16 12:26:59.286000 audit[5010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4999 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616265343639336234623933363263643836656130316634653837 Dec 16 12:26:59.286000 audit: BPF prog-id=247 op=UNLOAD Dec 16 
12:26:59.286000 audit[5010]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4999 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616265343639336234623933363263643836656130316634653837 Dec 16 12:26:59.286000 audit: BPF prog-id=246 op=UNLOAD Dec 16 12:26:59.286000 audit[5010]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4999 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616265343639336234623933363263643836656130316634653837 Dec 16 12:26:59.286000 audit: BPF prog-id=248 op=LOAD Dec 16 12:26:59.286000 audit[5010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4999 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616265343639336234623933363263643836656130316634653837 Dec 16 12:26:59.290603 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:26:59.318035 systemd-networkd[1494]: cali9b83d7318d0: Link UP Dec 16 12:26:59.319119 systemd-networkd[1494]: cali9b83d7318d0: Gained carrier Dec 16 12:26:59.326386 containerd[1582]: time="2025-12-16T12:26:59.326345592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f4fcfc8d-7nnp6,Uid:c0a4e673-b12d-498c-8d03-d2574bb6b967,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"10abe4693b4b9362cd86ea01f4e87525880df5f7337338a2b6de28a433c3d163\"" Dec 16 12:26:59.329614 containerd[1582]: time="2025-12-16T12:26:59.329433863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.110 [INFO][4936] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--6zw4p-eth0 coredns-66bc5c9577- kube-system b9fe45d7-db81-4803-93c7-fa2d49931f66 905 0 2025-12-16 12:26:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-6zw4p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9b83d7318d0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} 
ContainerID="b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" Namespace="kube-system" Pod="coredns-66bc5c9577-6zw4p" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6zw4p-" Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.110 [INFO][4936] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" Namespace="kube-system" Pod="coredns-66bc5c9577-6zw4p" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6zw4p-eth0" Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.139 [INFO][4970] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" HandleID="k8s-pod-network.b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" Workload="localhost-k8s-coredns--66bc5c9577--6zw4p-eth0" Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.139 [INFO][4970] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" HandleID="k8s-pod-network.b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" Workload="localhost-k8s-coredns--66bc5c9577--6zw4p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000110dd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-6zw4p", "timestamp":"2025-12-16 12:26:59.139362509 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.139 [INFO][4970] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.173 [INFO][4970] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.173 [INFO][4970] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.248 [INFO][4970] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" host="localhost" Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.255 [INFO][4970] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.279 [INFO][4970] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.286 [INFO][4970] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.292 [INFO][4970] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.292 [INFO][4970] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" host="localhost" Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.295 [INFO][4970] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4 Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.302 [INFO][4970] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" host="localhost" Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.310 [INFO][4970] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" host="localhost" Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.310 [INFO][4970] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" host="localhost" Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.310 [INFO][4970] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:26:59.338974 containerd[1582]: 2025-12-16 12:26:59.311 [INFO][4970] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" HandleID="k8s-pod-network.b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" Workload="localhost-k8s-coredns--66bc5c9577--6zw4p-eth0" Dec 16 12:26:59.339805 containerd[1582]: 2025-12-16 12:26:59.315 [INFO][4936] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" Namespace="kube-system" Pod="coredns-66bc5c9577-6zw4p" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6zw4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--6zw4p-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b9fe45d7-db81-4803-93c7-fa2d49931f66", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-6zw4p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b83d7318d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:59.339805 containerd[1582]: 2025-12-16 12:26:59.315 [INFO][4936] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" Namespace="kube-system" Pod="coredns-66bc5c9577-6zw4p" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6zw4p-eth0" Dec 16 12:26:59.339805 containerd[1582]: 2025-12-16 12:26:59.315 [INFO][4936] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b83d7318d0 ContainerID="b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" Namespace="kube-system" Pod="coredns-66bc5c9577-6zw4p" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6zw4p-eth0" Dec 16 12:26:59.339805 containerd[1582]: 2025-12-16 
12:26:59.319 [INFO][4936] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" Namespace="kube-system" Pod="coredns-66bc5c9577-6zw4p" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6zw4p-eth0" Dec 16 12:26:59.339805 containerd[1582]: 2025-12-16 12:26:59.319 [INFO][4936] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" Namespace="kube-system" Pod="coredns-66bc5c9577-6zw4p" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6zw4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--6zw4p-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b9fe45d7-db81-4803-93c7-fa2d49931f66", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4", Pod:"coredns-66bc5c9577-6zw4p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b83d7318d0", MAC:"be:09:2d:61:f2:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:26:59.339805 containerd[1582]: 2025-12-16 12:26:59.335 [INFO][4936] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" Namespace="kube-system" Pod="coredns-66bc5c9577-6zw4p" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6zw4p-eth0" Dec 16 12:26:59.351000 audit[5044]: NETFILTER_CFG table=filter:131 family=2 entries=56 op=nft_register_chain pid=5044 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:26:59.351000 audit[5044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25096 a0=3 a1=ffffcfaf22a0 a2=0 a3=ffffb6cd9fa8 items=0 ppid=4712 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.351000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:26:59.359781 containerd[1582]: time="2025-12-16T12:26:59.359731365Z" level=info msg="connecting to shim b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4" address="unix:///run/containerd/s/117a24cf958fa197f4d2af5ed52ed3cb6de6f2aaf168cc55ecc7d77ab5d81776" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:26:59.378000 audit[5078]: NETFILTER_CFG table=filter:132 family=2 entries=14 op=nft_register_rule pid=5078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:59.378000 audit[5078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd993cb50 a2=0 a3=1 items=0 ppid=2851 pid=5078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.378000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:59.390574 systemd[1]: Started cri-containerd-b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4.scope - libcontainer container b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4. Dec 16 12:26:59.390000 audit[5078]: NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=5078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:26:59.390000 audit[5078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd993cb50 a2=0 a3=1 items=0 ppid=2851 pid=5078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.390000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:26:59.403000 audit: BPF prog-id=249 op=LOAD Dec 16 12:26:59.404000 audit: BPF prog-id=250 op=LOAD Dec 16 12:26:59.404000 audit[5065]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5053 pid=5065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232326565346530633664303664323937383835663766666332633832 Dec 16 12:26:59.404000 audit: BPF prog-id=250 op=UNLOAD Dec 16 12:26:59.404000 audit[5065]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5053 pid=5065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.404000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232326565346530633664303664323937383835663766666332633832 Dec 16 12:26:59.404000 audit: BPF prog-id=251 op=LOAD Dec 16 12:26:59.404000 audit[5065]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5053 pid=5065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232326565346530633664303664323937383835663766666332633832 Dec 16 12:26:59.405000 audit: BPF prog-id=252 op=LOAD Dec 16 12:26:59.405000 audit[5065]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5053 pid=5065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232326565346530633664303664323937383835663766666332633832 Dec 16 12:26:59.405000 audit: BPF prog-id=252 op=UNLOAD Dec 16 12:26:59.405000 audit[5065]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5053 pid=5065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232326565346530633664303664323937383835663766666332633832 Dec 16 12:26:59.405000 audit: BPF prog-id=251 op=UNLOAD Dec 16 12:26:59.405000 audit[5065]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5053 pid=5065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232326565346530633664303664323937383835663766666332633832 Dec 16 12:26:59.405000 audit: BPF prog-id=253 op=LOAD Dec 16 12:26:59.405000 audit[5065]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5053 pid=5065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.405000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232326565346530633664303664323937383835663766666332633832 Dec 16 12:26:59.407711 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:26:59.433829 containerd[1582]: time="2025-12-16T12:26:59.433786655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6zw4p,Uid:b9fe45d7-db81-4803-93c7-fa2d49931f66,Namespace:kube-system,Attempt:0,} returns sandbox id \"b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4\"" Dec 16 12:26:59.434561 kubelet[2731]: E1216 12:26:59.434532 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:26:59.442614 containerd[1582]: time="2025-12-16T12:26:59.442545029Z" level=info msg="CreateContainer within sandbox \"b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:26:59.507921 containerd[1582]: time="2025-12-16T12:26:59.507728617Z" level=info msg="Container 3037dd5a34c72aee496a9702f637744e57613941c510c3687796a707cb2cb4fd: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:26:59.514186 containerd[1582]: time="2025-12-16T12:26:59.514079611Z" level=info msg="CreateContainer within sandbox \"b22ee4e0c6d06d297885f7ffc2c82287123ce34ebbeb2f2c60ae8c57846d5ce4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3037dd5a34c72aee496a9702f637744e57613941c510c3687796a707cb2cb4fd\"" Dec 16 12:26:59.514728 containerd[1582]: time="2025-12-16T12:26:59.514706778Z" level=info msg="StartContainer for \"3037dd5a34c72aee496a9702f637744e57613941c510c3687796a707cb2cb4fd\"" Dec 16 12:26:59.515818 containerd[1582]: time="2025-12-16T12:26:59.515781498Z" level=info msg="connecting to shim 3037dd5a34c72aee496a9702f637744e57613941c510c3687796a707cb2cb4fd" address="unix:///run/containerd/s/117a24cf958fa197f4d2af5ed52ed3cb6de6f2aaf168cc55ecc7d77ab5d81776" protocol=ttrpc version=3 Dec 16 12:26:59.540909 systemd[1]: Started cri-containerd-3037dd5a34c72aee496a9702f637744e57613941c510c3687796a707cb2cb4fd.scope - libcontainer container 3037dd5a34c72aee496a9702f637744e57613941c510c3687796a707cb2cb4fd. 
Dec 16 12:26:59.544885 containerd[1582]: time="2025-12-16T12:26:59.544840908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:26:59.546335 containerd[1582]: time="2025-12-16T12:26:59.546269775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:26:59.546429 containerd[1582]: time="2025-12-16T12:26:59.546381983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:26:59.547499 kubelet[2731]: E1216 12:26:59.546585 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:26:59.547499 kubelet[2731]: E1216 12:26:59.546641 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:26:59.547499 kubelet[2731]: E1216 12:26:59.546748 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-86f4fcfc8d-7nnp6_calico-apiserver(c0a4e673-b12d-498c-8d03-d2574bb6b967): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:26:59.547499 kubelet[2731]: E1216 12:26:59.546784 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-7nnp6" podUID="c0a4e673-b12d-498c-8d03-d2574bb6b967" Dec 16 12:26:59.559000 audit: BPF prog-id=254 op=LOAD Dec 16 12:26:59.560000 audit: BPF prog-id=255 op=LOAD Dec 16 12:26:59.560000 audit[5094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5053 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330333764643561333463373261656534393661393730326636333737 Dec 16 12:26:59.560000 audit: BPF prog-id=255 op=UNLOAD Dec 16 12:26:59.560000 audit[5094]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5053 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.560000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330333764643561333463373261656534393661393730326636333737 Dec 16 12:26:59.560000 audit: BPF prog-id=256 op=LOAD Dec 16 12:26:59.560000 audit[5094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5053 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330333764643561333463373261656534393661393730326636333737 Dec 16 12:26:59.561000 audit: BPF prog-id=257 op=LOAD Dec 16 12:26:59.561000 audit[5094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5053 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330333764643561333463373261656534393661393730326636333737 Dec 16 12:26:59.561000 audit: BPF prog-id=257 op=UNLOAD Dec 16 12:26:59.561000 audit[5094]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5053 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330333764643561333463373261656534393661393730326636333737 Dec 16 12:26:59.561000 audit: BPF prog-id=256 op=UNLOAD Dec 16 12:26:59.561000 audit[5094]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5053 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330333764643561333463373261656534393661393730326636333737 Dec 16 12:26:59.561000 audit: BPF prog-id=258 op=LOAD Dec 16 12:26:59.561000 audit[5094]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5053 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:26:59.561000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330333764643561333463373261656534393661393730326636333737 Dec 16 12:26:59.588599 containerd[1582]: time="2025-12-16T12:26:59.588466166Z" level=info msg="StartContainer for \"3037dd5a34c72aee496a9702f637744e57613941c510c3687796a707cb2cb4fd\" returns successfully" Dec 16 12:26:59.715487 systemd-networkd[1494]: califf317edbed5: Gained IPv6LL Dec 16 12:27:00.039902 containerd[1582]: time="2025-12-16T12:27:00.039798995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8dw5m,Uid:8ba61881-a1a2-472c-ad8a-7b1172620126,Namespace:calico-system,Attempt:0,}" Dec 16 12:27:00.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.45:22-10.0.0.1:39118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:00.094463 systemd[1]: Started sshd@9-10.0.0.45:22-10.0.0.1:39118.service - OpenSSH per-connection server daemon (10.0.0.1:39118). Dec 16 12:27:00.098076 kernel: kauditd_printk_skb: 430 callbacks suppressed Dec 16 12:27:00.098164 kernel: audit: type=1130 audit(1765888020.093:751): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.45:22-10.0.0.1:39118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:00.190000 audit[5145]: USER_ACCT pid=5145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.191815 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 39118 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:27:00.195000 audit[5145]: CRED_ACQ pid=5145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.198379 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:00.201378 kernel: audit: type=1101 audit(1765888020.190:752): pid=5145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.201748 kernel: audit: type=1103 audit(1765888020.195:753): pid=5145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.196000 audit[5145]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffddf24f0 a2=3 a3=0 items=0 ppid=1 pid=5145 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.207122 systemd-networkd[1494]: calibabb0e61e92: Link UP Dec 16 12:27:00.207470 kernel: audit: type=1006 audit(1765888020.196:754): 
pid=5145 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 12:27:00.207500 kernel: audit: type=1300 audit(1765888020.196:754): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffddf24f0 a2=3 a3=0 items=0 ppid=1 pid=5145 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.196000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:00.208089 systemd-networkd[1494]: calibabb0e61e92: Gained carrier Dec 16 12:27:00.209221 kernel: audit: type=1327 audit(1765888020.196:754): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:00.218824 systemd-logind[1553]: New session 10 of user core. Dec 16 12:27:00.225710 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:27:00.235523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2751696392.mount: Deactivated successfully. Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.106 [INFO][5130] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8dw5m-eth0 csi-node-driver- calico-system 8ba61881-a1a2-472c-ad8a-7b1172620126 813 0 2025-12-16 12:26:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8dw5m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibabb0e61e92 [] [] }} ContainerID="4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" Namespace="calico-system" Pod="csi-node-driver-8dw5m" WorkloadEndpoint="localhost-k8s-csi--node--driver--8dw5m-" Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.106 [INFO][5130] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" Namespace="calico-system" Pod="csi-node-driver-8dw5m" WorkloadEndpoint="localhost-k8s-csi--node--driver--8dw5m-eth0" Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.136 [INFO][5147] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" HandleID="k8s-pod-network.4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" Workload="localhost-k8s-csi--node--driver--8dw5m-eth0" Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.137 [INFO][5147] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" HandleID="k8s-pod-network.4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" Workload="localhost-k8s-csi--node--driver--8dw5m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400051b8c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8dw5m", "timestamp":"2025-12-16 12:27:00.136927221 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.137 [INFO][5147] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.137 [INFO][5147] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.137 [INFO][5147] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.155 [INFO][5147] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" host="localhost" Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.161 [INFO][5147] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.167 [INFO][5147] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.170 [INFO][5147] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.176 [INFO][5147] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.176 [INFO][5147] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" host="localhost" Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.179 [INFO][5147] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537 Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.183 [INFO][5147] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" host="localhost" Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.197 [INFO][5147] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" host="localhost" Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.197 [INFO][5147] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" host="localhost" Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.197 [INFO][5147] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:27:00.237041 containerd[1582]: 2025-12-16 12:27:00.197 [INFO][5147] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" HandleID="k8s-pod-network.4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" Workload="localhost-k8s-csi--node--driver--8dw5m-eth0" Dec 16 12:27:00.238344 containerd[1582]: 2025-12-16 12:27:00.202 [INFO][5130] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" Namespace="calico-system" Pod="csi-node-driver-8dw5m" WorkloadEndpoint="localhost-k8s-csi--node--driver--8dw5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8dw5m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ba61881-a1a2-472c-ad8a-7b1172620126", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8dw5m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibabb0e61e92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:27:00.238344 containerd[1582]: 2025-12-16 12:27:00.202 [INFO][5130] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" Namespace="calico-system" Pod="csi-node-driver-8dw5m" WorkloadEndpoint="localhost-k8s-csi--node--driver--8dw5m-eth0" Dec 16 12:27:00.238344 containerd[1582]: 2025-12-16 12:27:00.202 [INFO][5130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibabb0e61e92 ContainerID="4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" Namespace="calico-system" Pod="csi-node-driver-8dw5m" WorkloadEndpoint="localhost-k8s-csi--node--driver--8dw5m-eth0" Dec 16 12:27:00.238344 containerd[1582]: 2025-12-16 12:27:00.208 [INFO][5130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" Namespace="calico-system" Pod="csi-node-driver-8dw5m" WorkloadEndpoint="localhost-k8s-csi--node--driver--8dw5m-eth0" Dec 16 12:27:00.238344 containerd[1582]: 2025-12-16 12:27:00.210 [INFO][5130] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" Namespace="calico-system" Pod="csi-node-driver-8dw5m" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--8dw5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8dw5m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ba61881-a1a2-472c-ad8a-7b1172620126", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537", Pod:"csi-node-driver-8dw5m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibabb0e61e92", MAC:"52:0e:79:3b:ab:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:27:00.238344 containerd[1582]: 2025-12-16 12:27:00.224 [INFO][5130] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" Namespace="calico-system" Pod="csi-node-driver-8dw5m" WorkloadEndpoint="localhost-k8s-csi--node--driver--8dw5m-eth0" Dec 16 12:27:00.237000 audit[5145]: USER_START pid=5145 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.239000 audit[5163]: CRED_ACQ pid=5163 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.245765 kernel: audit: type=1105 audit(1765888020.237:755): pid=5145 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.245850 kernel: audit: type=1103 audit(1765888020.239:756): pid=5163 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.250000 audit[5167]: NETFILTER_CFG table=filter:134 family=2 entries=64 op=nft_register_chain pid=5167 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:27:00.254344 kernel: audit: type=1325 audit(1765888020.250:757): table=filter:134 family=2 entries=64 
op=nft_register_chain pid=5167 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:27:00.254398 kernel: audit: type=1300 audit(1765888020.250:757): arch=c00000b7 syscall=211 success=yes exit=27892 a0=3 a1=fffff8079be0 a2=0 a3=ffff887c3fa8 items=0 ppid=4712 pid=5167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.250000 audit[5167]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27892 a0=3 a1=fffff8079be0 a2=0 a3=ffff887c3fa8 items=0 ppid=4712 pid=5167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.250000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:27:00.268979 kubelet[2731]: E1216 12:27:00.268945 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:27:00.275153 kubelet[2731]: E1216 12:27:00.275108 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7bcbfb5-5mqzg" podUID="e774688f-af9f-49ba-93f6-0e9b13337ee0" Dec 16 12:27:00.275410 kubelet[2731]: E1216 12:27:00.275333 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-7nnp6" podUID="c0a4e673-b12d-498c-8d03-d2574bb6b967" Dec 16 12:27:00.290507 kubelet[2731]: I1216 12:27:00.290376 2731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6zw4p" podStartSLOduration=44.290359702 podStartE2EDuration="44.290359702s" podCreationTimestamp="2025-12-16 12:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:27:00.289188217 +0000 UTC m=+51.378173130" watchObservedRunningTime="2025-12-16 12:27:00.290359702 +0000 UTC m=+51.379344575" Dec 16 12:27:00.310962 containerd[1582]: time="2025-12-16T12:27:00.309773434Z" level=info msg="connecting to shim 4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537" address="unix:///run/containerd/s/f9846d139c4a75b5cb1ae7fb9569ce5c280fe26daa3dafad38b0cb042886c2e0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:27:00.374608 systemd[1]: Started cri-containerd-4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537.scope - libcontainer container 4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537. 
Dec 16 12:27:00.456106 sshd[5163]: Connection closed by 10.0.0.1 port 39118 Dec 16 12:27:00.456746 sshd-session[5145]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:00.455000 audit[5216]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=5216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:27:00.455000 audit[5216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffd7132a0 a2=0 a3=1 items=0 ppid=2851 pid=5216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.455000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:27:00.458000 audit[5145]: USER_END pid=5145 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.458000 audit[5145]: CRED_DISP pid=5145 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.459000 audit[5216]: NETFILTER_CFG table=nat:136 family=2 entries=44 op=nft_register_rule pid=5216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:27:00.459000 audit[5216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffffd7132a0 a2=0 a3=1 items=0 ppid=2851 pid=5216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:27:00.468484 systemd[1]: sshd@9-10.0.0.45:22-10.0.0.1:39118.service: Deactivated successfully. Dec 16 12:27:00.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.45:22-10.0.0.1:39118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:00.473156 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:27:00.475900 systemd-logind[1553]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:27:00.481435 systemd[1]: Started sshd@10-10.0.0.45:22-10.0.0.1:39122.service - OpenSSH per-connection server daemon (10.0.0.1:39122). Dec 16 12:27:00.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.45:22-10.0.0.1:39122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:00.482000 audit: BPF prog-id=259 op=LOAD Dec 16 12:27:00.484461 systemd-logind[1553]: Removed session 10. 
Dec 16 12:27:00.484000 audit: BPF prog-id=260 op=LOAD Dec 16 12:27:00.484000 audit[5195]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5182 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393061373463373466313262316535383661656436646362646132 Dec 16 12:27:00.485000 audit: BPF prog-id=260 op=UNLOAD Dec 16 12:27:00.485000 audit[5195]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5182 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393061373463373466313262316535383661656436646362646132 Dec 16 12:27:00.485000 audit: BPF prog-id=261 op=LOAD Dec 16 12:27:00.485000 audit[5195]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5182 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393061373463373466313262316535383661656436646362646132 Dec 16 12:27:00.485000 audit: BPF prog-id=262 op=LOAD Dec 16 12:27:00.485000 audit[5195]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5182 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393061373463373466313262316535383661656436646362646132 Dec 16 12:27:00.486000 audit: BPF prog-id=262 op=UNLOAD Dec 16 12:27:00.486000 audit[5195]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5182 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393061373463373466313262316535383661656436646362646132 Dec 16 12:27:00.486000 audit: BPF prog-id=261 op=UNLOAD Dec 16 12:27:00.486000 audit[5195]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=5182 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393061373463373466313262316535383661656436646362646132 Dec 16 12:27:00.486000 audit: BPF prog-id=263 op=LOAD Dec 16 12:27:00.486000 audit[5195]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5182 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393061373463373466313262316535383661656436646362646132 Dec 16 12:27:00.495175 systemd-resolved[1275]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:27:00.523874 containerd[1582]: time="2025-12-16T12:27:00.523828965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8dw5m,Uid:8ba61881-a1a2-472c-ad8a-7b1172620126,Namespace:calico-system,Attempt:0,} returns sandbox id \"4690a74c74f12b1e586aed6dcbda2a97c7bd322df2ef89f2b59ce6c1cc7e8537\"" Dec 16 12:27:00.526546 containerd[1582]: time="2025-12-16T12:27:00.526475998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:27:00.549000 audit[5221]: USER_ACCT pid=5221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.550957 sshd[5221]: Accepted publickey for core from 10.0.0.1 port 39122 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:27:00.551000 audit[5221]: CRED_ACQ pid=5221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.551000 audit[5221]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe444c2b0 a2=3 a3=0 items=0 ppid=1 pid=5221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.551000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:00.552839 sshd-session[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:00.558244 systemd-logind[1553]: New session 11 of user core. Dec 16 12:27:00.567588 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 12:27:00.569000 audit[5221]: USER_START pid=5221 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.571000 audit[5233]: CRED_ACQ pid=5233 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.739693 containerd[1582]: time="2025-12-16T12:27:00.739614862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:00.740996 containerd[1582]: time="2025-12-16T12:27:00.740926878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:27:00.741070 containerd[1582]: time="2025-12-16T12:27:00.741003683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:00.741500 kubelet[2731]: E1216 12:27:00.741457 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:27:00.741572 kubelet[2731]: E1216 12:27:00.741506 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:27:00.741885 kubelet[2731]: E1216 12:27:00.741597 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8dw5m_calico-system(8ba61881-a1a2-472c-ad8a-7b1172620126): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:00.743147 containerd[1582]: time="2025-12-16T12:27:00.742701047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:27:00.797615 sshd[5233]: Connection closed by 10.0.0.1 port 39122 Dec 16 12:27:00.798139 sshd-session[5221]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:00.799000 audit[5221]: USER_END pid=5221 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.799000 audit[5221]: CRED_DISP pid=5221 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.813011 systemd[1]: sshd@10-10.0.0.45:22-10.0.0.1:39122.service: Deactivated successfully. 
Dec 16 12:27:00.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.45:22-10.0.0.1:39122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:00.817824 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:27:00.821975 systemd-logind[1553]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:27:00.825059 systemd[1]: Started sshd@11-10.0.0.45:22-10.0.0.1:39132.service - OpenSSH per-connection server daemon (10.0.0.1:39132). Dec 16 12:27:00.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.45:22-10.0.0.1:39132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:00.827839 systemd-logind[1553]: Removed session 11. Dec 16 12:27:00.883000 audit[5245]: USER_ACCT pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.885097 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 39132 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:27:00.884000 audit[5245]: CRED_ACQ pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.884000 audit[5245]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff26e4d40 a2=3 a3=0 items=0 ppid=1 pid=5245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:00.884000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:00.886675 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:00.891701 systemd-logind[1553]: New session 12 of user core. Dec 16 12:27:00.905754 systemd[1]: Started session-12.scope - Session 12 of User core. 
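The sshd sessions in this stretch open and close within a second or two. A minimal Python 3 sketch that pairs the pam_unix open and close lines by the sshd-session pid and reports the duration; the two sample lines are copied from the entries above, and since the journal timestamps carry no year, strptime falls back to its default:

    import re
    from datetime import datetime

    LINE = re.compile(
        r"^(\w{3} {1,2}\d{1,2} \d{2}:\d{2}:\d{2}\.\d+) sshd-session\[(\d+)\]: "
        r"pam_unix\(sshd:session\): session (opened|closed)"
    )

    def session_durations(lines):
        opened = {}
        for line in lines:
            m = LINE.match(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f")
            pid, event = m.group(2), m.group(3)
            if event == "opened":
                opened[pid] = ts
            elif pid in opened:
                yield pid, (ts - opened.pop(pid)).total_seconds()

    sample = [
        "Dec 16 12:27:00.552839 sshd-session[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
        "Dec 16 12:27:00.798139 sshd-session[5221]: pam_unix(sshd:session): session closed for user core",
    ]
    print(list(session_durations(sample)))   # [('5221', 0.2453)]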
Dec 16 12:27:00.906000 audit[5245]: USER_START pid=5245 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.908000 audit[5248]: CRED_ACQ pid=5248 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:00.970680 containerd[1582]: time="2025-12-16T12:27:00.970626227Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:00.995660 systemd-networkd[1494]: cali1b4e7d00924: Gained IPv6LL Dec 16 12:27:00.998235 containerd[1582]: time="2025-12-16T12:27:00.998111146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:27:00.998235 containerd[1582]: time="2025-12-16T12:27:00.998178671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:00.998583 kubelet[2731]: E1216 12:27:00.998536 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:27:00.998877 kubelet[2731]: E1216 12:27:00.998669 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:27:00.999009 kubelet[2731]: E1216 12:27:00.998835 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8dw5m_calico-system(8ba61881-a1a2-472c-ad8a-7b1172620126): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:00.999092 kubelet[2731]: E1216 12:27:00.998980 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dw5m" podUID="8ba61881-a1a2-472c-ad8a-7b1172620126" Dec 16 12:27:01.024289 sshd[5248]: Connection closed by 
10.0.0.1 port 39132 Dec 16 12:27:01.024662 sshd-session[5245]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:01.024000 audit[5245]: USER_END pid=5245 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:01.024000 audit[5245]: CRED_DISP pid=5245 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:01.028932 systemd[1]: sshd@11-10.0.0.45:22-10.0.0.1:39132.service: Deactivated successfully. Dec 16 12:27:01.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.45:22-10.0.0.1:39132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:01.030877 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:27:01.032635 systemd-logind[1553]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:27:01.034392 systemd-logind[1553]: Removed session 12. Dec 16 12:27:01.251562 systemd-networkd[1494]: cali9b83d7318d0: Gained IPv6LL Dec 16 12:27:01.277537 kubelet[2731]: E1216 12:27:01.277498 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:27:01.281137 kubelet[2731]: E1216 12:27:01.279261 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-7nnp6" podUID="c0a4e673-b12d-498c-8d03-d2574bb6b967" Dec 16 12:27:01.281137 kubelet[2731]: E1216 12:27:01.279517 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dw5m" podUID="8ba61881-a1a2-472c-ad8a-7b1172620126" Dec 16 12:27:01.443493 systemd-networkd[1494]: calibabb0e61e92: Gained IPv6LL Dec 16 12:27:01.480000 audit[5261]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=5261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:27:01.480000 audit[5261]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 
a1=ffffc4d6a820 a2=0 a3=1 items=0 ppid=2851 pid=5261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:01.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:27:01.490000 audit[5261]: NETFILTER_CFG table=nat:138 family=2 entries=56 op=nft_register_chain pid=5261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:27:01.490000 audit[5261]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffc4d6a820 a2=0 a3=1 items=0 ppid=2851 pid=5261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:01.490000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:27:02.279394 kubelet[2731]: E1216 12:27:02.279362 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:27:02.281605 kubelet[2731]: E1216 12:27:02.281505 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dw5m" podUID="8ba61881-a1a2-472c-ad8a-7b1172620126" Dec 16 12:27:05.039021 containerd[1582]: time="2025-12-16T12:27:05.038953895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:27:05.249782 containerd[1582]: time="2025-12-16T12:27:05.249678011Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:05.251464 containerd[1582]: time="2025-12-16T12:27:05.251421243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:27:05.251669 containerd[1582]: time="2025-12-16T12:27:05.251503648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:05.251727 kubelet[2731]: E1216 12:27:05.251683 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:27:05.252095 kubelet[2731]: E1216 12:27:05.251731 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:27:05.252095 kubelet[2731]: E1216 12:27:05.251803 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-585749bd7b-mzpmc_calico-system(9544238e-7c32-4f71-bf15-98eec7d18a91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:05.253151 containerd[1582]: time="2025-12-16T12:27:05.253114751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:27:05.468592 containerd[1582]: time="2025-12-16T12:27:05.468547008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:05.470064 containerd[1582]: time="2025-12-16T12:27:05.469938377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:27:05.470064 containerd[1582]: time="2025-12-16T12:27:05.470025143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:05.470588 kubelet[2731]: E1216 12:27:05.470384 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:27:05.470588 kubelet[2731]: E1216 12:27:05.470432 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:27:05.470588 kubelet[2731]: E1216 12:27:05.470511 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-585749bd7b-mzpmc_calico-system(9544238e-7c32-4f71-bf15-98eec7d18a91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:05.470588 kubelet[2731]: E1216 12:27:05.470550 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585749bd7b-mzpmc" podUID="9544238e-7c32-4f71-bf15-98eec7d18a91" Dec 16 
12:27:06.039661 systemd[1]: Started sshd@12-10.0.0.45:22-10.0.0.1:60978.service - OpenSSH per-connection server daemon (10.0.0.1:60978). Dec 16 12:27:06.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.45:22-10.0.0.1:60978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:06.041024 kernel: kauditd_printk_skb: 60 callbacks suppressed Dec 16 12:27:06.041101 kernel: audit: type=1130 audit(1765888026.039:791): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.45:22-10.0.0.1:60978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:06.118000 audit[5277]: USER_ACCT pid=5277 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:06.118833 sshd[5277]: Accepted publickey for core from 10.0.0.1 port 60978 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:27:06.121623 sshd-session[5277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:06.120000 audit[5277]: CRED_ACQ pid=5277 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:06.126999 kernel: audit: type=1101 audit(1765888026.118:792): pid=5277 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:06.127089 kernel: audit: type=1103 audit(1765888026.120:793): pid=5277 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:06.127114 kernel: audit: type=1006 audit(1765888026.120:794): pid=5277 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 12:27:06.126465 systemd-logind[1553]: New session 13 of user core. 
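The raw kernel lines above report audit records by number (type=1130, type=1101, and so on) while the same events appear elsewhere in the journal under symbolic names. A small Python mapping of the record types seen in this log, using the standard linux/audit.h values:

    import re

    # Numeric audit record types (linux/audit.h) observed in this journal.
    AUDIT_TYPES = {
        1006: "LOGIN",
        1101: "USER_ACCT",
        1103: "CRED_ACQ",
        1104: "CRED_DISP",
        1105: "USER_START",
        1106: "USER_END",
        1130: "SERVICE_START",
        1131: "SERVICE_STOP",
        1300: "SYSCALL",
        1327: "PROCTITLE",
    }

    def audit_type_name(line):
        m = re.search(r"audit: type=(\d+)", line)
        return AUDIT_TYPES.get(int(m.group(1))) if m else None

    print(audit_type_name("kernel: audit: type=1130 audit(1765888026.039:791): pid=1 uid=0"))  # SERVICE_START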
Dec 16 12:27:06.120000 audit[5277]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffef9d000 a2=3 a3=0 items=0 ppid=1 pid=5277 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:06.136195 kernel: audit: type=1300 audit(1765888026.120:794): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffef9d000 a2=3 a3=0 items=0 ppid=1 pid=5277 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:06.136284 kernel: audit: type=1327 audit(1765888026.120:794): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:06.120000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:06.139614 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:27:06.141000 audit[5277]: USER_START pid=5277 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:06.148368 kernel: audit: type=1105 audit(1765888026.141:795): pid=5277 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:06.148000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:06.153367 kernel: audit: type=1103 audit(1765888026.148:796): pid=5280 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:06.266934 sshd[5280]: Connection closed by 10.0.0.1 port 60978 Dec 16 12:27:06.267446 sshd-session[5277]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:06.270000 audit[5277]: USER_END pid=5277 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:06.271000 audit[5277]: CRED_DISP pid=5277 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:06.277017 systemd[1]: sshd@12-10.0.0.45:22-10.0.0.1:60978.service: Deactivated successfully. Dec 16 12:27:06.280058 systemd[1]: session-13.scope: Deactivated successfully. 
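On the DNS side, the "Nameserver limits exceeded" warnings kubelet logged above at 12:27:01 and 12:27:02 mean the node's resolv.conf lists more nameservers than kubelet will pass to pods; the applied line kept only 1.1.1.1, 1.0.0.1 and 8.8.8.8. A minimal sketch that reproduces the check; the path and the three-server limit are assumed defaults, not values read from this log:

    def check_resolv_conf(path="/etc/resolv.conf", limit=3):
        # Collect the nameserver entries the way a resolver would see them.
        with open(path) as f:
            servers = [parts[1] for parts in (line.split() for line in f)
                       if len(parts) > 1 and parts[0] == "nameserver"]
        if len(servers) > limit:
            print(f"{len(servers)} nameservers configured; only the first {limit} "
                  f"are applied: {' '.join(servers[:limit])}")
        return servers

    # check_resolv_conf()   # on this node the applied set was: 1.1.1.1 1.0.0.1 8.8.8.8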
Dec 16 12:27:06.280263 kernel: audit: type=1106 audit(1765888026.270:797): pid=5277 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:06.280298 kernel: audit: type=1104 audit(1765888026.271:798): pid=5277 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:06.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.45:22-10.0.0.1:60978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:06.281497 systemd-logind[1553]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:27:06.283695 systemd-logind[1553]: Removed session 13. Dec 16 12:27:08.030019 containerd[1582]: time="2025-12-16T12:27:08.029866611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:27:08.302729 containerd[1582]: time="2025-12-16T12:27:08.302572345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:08.303867 containerd[1582]: time="2025-12-16T12:27:08.303785017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:27:08.304108 containerd[1582]: time="2025-12-16T12:27:08.303837060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:08.304144 kubelet[2731]: E1216 12:27:08.304096 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:27:08.304704 kubelet[2731]: E1216 12:27:08.304146 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:27:08.304704 kubelet[2731]: E1216 12:27:08.304684 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-cbqqb_calico-system(3e707694-c3c3-46cb-9c34-819568db9981): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:08.304772 kubelet[2731]: E1216 12:27:08.304721 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cbqqb" 
podUID="3e707694-c3c3-46cb-9c34-819568db9981" Dec 16 12:27:10.030763 containerd[1582]: time="2025-12-16T12:27:10.030701822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:27:10.252538 containerd[1582]: time="2025-12-16T12:27:10.252473462Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:10.253764 containerd[1582]: time="2025-12-16T12:27:10.253712851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:27:10.253871 containerd[1582]: time="2025-12-16T12:27:10.253786536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:10.254414 kubelet[2731]: E1216 12:27:10.253962 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:27:10.254414 kubelet[2731]: E1216 12:27:10.254012 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:27:10.254414 kubelet[2731]: E1216 12:27:10.254120 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-686cd7448d-zpgvc_calico-system(cb75681b-286c-4a6b-a7e3-6df9c7f59d30): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:10.254414 kubelet[2731]: E1216 12:27:10.254155 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-686cd7448d-zpgvc" podUID="cb75681b-286c-4a6b-a7e3-6df9c7f59d30" Dec 16 12:27:11.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.45:22-10.0.0.1:53682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:11.284494 systemd[1]: Started sshd@13-10.0.0.45:22-10.0.0.1:53682.service - OpenSSH per-connection server daemon (10.0.0.1:53682). Dec 16 12:27:11.287988 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:27:11.288097 kernel: audit: type=1130 audit(1765888031.283:800): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.45:22-10.0.0.1:53682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:27:11.350000 audit[5303]: USER_ACCT pid=5303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:11.351624 sshd[5303]: Accepted publickey for core from 10.0.0.1 port 53682 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:27:11.353000 audit[5303]: CRED_ACQ pid=5303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:11.355030 sshd-session[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:11.357559 kernel: audit: type=1101 audit(1765888031.350:801): pid=5303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:11.357633 kernel: audit: type=1103 audit(1765888031.353:802): pid=5303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:11.359591 kernel: audit: type=1006 audit(1765888031.353:803): pid=5303 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 12:27:11.353000 audit[5303]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffad568a0 a2=3 a3=0 items=0 ppid=1 pid=5303 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:11.363102 kernel: audit: type=1300 audit(1765888031.353:803): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffad568a0 a2=3 a3=0 items=0 ppid=1 pid=5303 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:11.353000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:11.364587 kernel: audit: type=1327 audit(1765888031.353:803): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:11.367314 systemd-logind[1553]: New session 14 of user core. Dec 16 12:27:11.378598 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 12:27:11.379000 audit[5303]: USER_START pid=5303 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:11.385343 kernel: audit: type=1105 audit(1765888031.379:804): pid=5303 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:11.384000 audit[5306]: CRED_ACQ pid=5306 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:11.389370 kernel: audit: type=1103 audit(1765888031.384:805): pid=5306 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:11.480180 sshd[5306]: Connection closed by 10.0.0.1 port 53682 Dec 16 12:27:11.480791 sshd-session[5303]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:11.480000 audit[5303]: USER_END pid=5303 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:11.485340 systemd[1]: sshd@13-10.0.0.45:22-10.0.0.1:53682.service: Deactivated successfully. Dec 16 12:27:11.481000 audit[5303]: CRED_DISP pid=5303 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:11.487651 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:27:11.488479 systemd-logind[1553]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:27:11.488991 kernel: audit: type=1106 audit(1765888031.480:806): pid=5303 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:11.489030 kernel: audit: type=1104 audit(1765888031.481:807): pid=5303 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:11.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.45:22-10.0.0.1:53682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:11.490074 systemd-logind[1553]: Removed session 14. 
Dec 16 12:27:12.029729 containerd[1582]: time="2025-12-16T12:27:12.029558821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:27:12.225449 containerd[1582]: time="2025-12-16T12:27:12.225356549Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:12.226523 containerd[1582]: time="2025-12-16T12:27:12.226477209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:27:12.226607 containerd[1582]: time="2025-12-16T12:27:12.226556533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:12.226758 kubelet[2731]: E1216 12:27:12.226710 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:27:12.227063 kubelet[2731]: E1216 12:27:12.226762 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:27:12.227114 containerd[1582]: time="2025-12-16T12:27:12.227067521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:27:12.228030 kubelet[2731]: E1216 12:27:12.227354 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-86f4fcfc8d-xt2gx_calico-apiserver(d237c895-0922-4d34-99eb-c1d5a8780e41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:12.228030 kubelet[2731]: E1216 12:27:12.227474 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-xt2gx" podUID="d237c895-0922-4d34-99eb-c1d5a8780e41" Dec 16 12:27:12.436941 containerd[1582]: time="2025-12-16T12:27:12.436893442Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:12.439350 containerd[1582]: time="2025-12-16T12:27:12.437901257Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:27:12.439350 containerd[1582]: time="2025-12-16T12:27:12.437987861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:12.439510 kubelet[2731]: E1216 12:27:12.438165 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:27:12.439510 kubelet[2731]: E1216 12:27:12.438248 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:27:12.439510 kubelet[2731]: E1216 12:27:12.438816 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-86f4fcfc8d-7nnp6_calico-apiserver(c0a4e673-b12d-498c-8d03-d2574bb6b967): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:12.439510 kubelet[2731]: E1216 12:27:12.438855 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-7nnp6" podUID="c0a4e673-b12d-498c-8d03-d2574bb6b967" Dec 16 12:27:15.032826 containerd[1582]: time="2025-12-16T12:27:15.032763664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:27:15.240031 containerd[1582]: time="2025-12-16T12:27:15.239972954Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:15.241652 containerd[1582]: time="2025-12-16T12:27:15.241544793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:27:15.241652 containerd[1582]: time="2025-12-16T12:27:15.241590715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:15.242494 kubelet[2731]: E1216 12:27:15.241784 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:27:15.242494 kubelet[2731]: E1216 12:27:15.241838 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:27:15.242494 kubelet[2731]: E1216 12:27:15.241940 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b7bcbfb5-5mqzg_calico-apiserver(e774688f-af9f-49ba-93f6-0e9b13337ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
logger="UnhandledError" Dec 16 12:27:15.242494 kubelet[2731]: E1216 12:27:15.242006 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7bcbfb5-5mqzg" podUID="e774688f-af9f-49ba-93f6-0e9b13337ee0" Dec 16 12:27:16.030773 containerd[1582]: time="2025-12-16T12:27:16.030483486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:27:16.032992 kubelet[2731]: E1216 12:27:16.032940 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585749bd7b-mzpmc" podUID="9544238e-7c32-4f71-bf15-98eec7d18a91" Dec 16 12:27:16.236998 containerd[1582]: time="2025-12-16T12:27:16.236780291Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:16.250871 containerd[1582]: time="2025-12-16T12:27:16.250800417Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:27:16.250967 containerd[1582]: time="2025-12-16T12:27:16.250883621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:16.251089 kubelet[2731]: E1216 12:27:16.251056 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:27:16.251389 kubelet[2731]: E1216 12:27:16.251103 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:27:16.251389 kubelet[2731]: E1216 12:27:16.251183 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8dw5m_calico-system(8ba61881-a1a2-472c-ad8a-7b1172620126): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:16.252126 containerd[1582]: time="2025-12-16T12:27:16.252072999Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:27:16.463091 containerd[1582]: time="2025-12-16T12:27:16.463031552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:16.464425 containerd[1582]: time="2025-12-16T12:27:16.464364737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:27:16.464542 containerd[1582]: time="2025-12-16T12:27:16.464406339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:16.464771 kubelet[2731]: E1216 12:27:16.464693 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:27:16.464771 kubelet[2731]: E1216 12:27:16.464746 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:27:16.464878 kubelet[2731]: E1216 12:27:16.464820 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8dw5m_calico-system(8ba61881-a1a2-472c-ad8a-7b1172620126): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:16.464878 kubelet[2731]: E1216 12:27:16.464859 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dw5m" podUID="8ba61881-a1a2-472c-ad8a-7b1172620126" Dec 16 12:27:16.496159 systemd[1]: Started sshd@14-10.0.0.45:22-10.0.0.1:53692.service - OpenSSH per-connection server daemon (10.0.0.1:53692). Dec 16 12:27:16.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.45:22-10.0.0.1:53692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:27:16.499635 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:27:16.499739 kernel: audit: type=1130 audit(1765888036.494:809): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.45:22-10.0.0.1:53692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:16.571000 audit[5319]: USER_ACCT pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:16.572656 sshd[5319]: Accepted publickey for core from 10.0.0.1 port 53692 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:27:16.576358 kernel: audit: type=1101 audit(1765888036.571:810): pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:16.575000 audit[5319]: CRED_ACQ pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:16.577266 sshd-session[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:16.581422 kernel: audit: type=1103 audit(1765888036.575:811): pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:16.581511 kernel: audit: type=1006 audit(1765888036.575:812): pid=5319 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 12:27:16.575000 audit[5319]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc82a1190 a2=3 a3=0 items=0 ppid=1 pid=5319 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:16.584910 kernel: audit: type=1300 audit(1765888036.575:812): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc82a1190 a2=3 a3=0 items=0 ppid=1 pid=5319 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:16.575000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:16.586358 kernel: audit: type=1327 audit(1765888036.575:812): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:16.589399 systemd-logind[1553]: New session 15 of user core. Dec 16 12:27:16.598360 systemd[1]: Started session-15.scope - Session 15 of User core. 
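The pull attempts for the same tags recur every few seconds at first and then more slowly, which is the ImagePullBackOff behaviour named in the kubelet messages above. A minimal sketch of that cadence; the 10-second initial delay, doubling, and 5-minute cap are commonly cited kubelet defaults, stated here as assumptions rather than read from this log:

    def backoff_schedule(initial=10.0, cap=300.0, factor=2.0, attempts=8):
        # Delay applied before each retry, doubling until it hits the cap.
        delay, schedule = initial, []
        for _ in range(attempts):
            schedule.append(min(delay, cap))
            delay *= factor
        return schedule

    print(backoff_schedule())   # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]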
Dec 16 12:27:16.600000 audit[5319]: USER_START pid=5319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:16.604000 audit[5322]: CRED_ACQ pid=5322 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:16.609722 kernel: audit: type=1105 audit(1765888036.600:813): pid=5319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:16.609808 kernel: audit: type=1103 audit(1765888036.604:814): pid=5322 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:16.689022 sshd[5322]: Connection closed by 10.0.0.1 port 53692 Dec 16 12:27:16.689426 sshd-session[5319]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:16.689000 audit[5319]: USER_END pid=5319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:16.693860 systemd[1]: sshd@14-10.0.0.45:22-10.0.0.1:53692.service: Deactivated successfully. Dec 16 12:27:16.689000 audit[5319]: CRED_DISP pid=5319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:16.696184 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:27:16.697287 systemd-logind[1553]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:27:16.697817 kernel: audit: type=1106 audit(1765888036.689:815): pid=5319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:16.698109 kernel: audit: type=1104 audit(1765888036.689:816): pid=5319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:16.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.45:22-10.0.0.1:53692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:16.698773 systemd-logind[1553]: Removed session 15. 
Dec 16 12:27:21.029830 kubelet[2731]: E1216 12:27:21.029763 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-686cd7448d-zpgvc" podUID="cb75681b-286c-4a6b-a7e3-6df9c7f59d30" Dec 16 12:27:21.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.45:22-10.0.0.1:46500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:21.703836 systemd[1]: Started sshd@15-10.0.0.45:22-10.0.0.1:46500.service - OpenSSH per-connection server daemon (10.0.0.1:46500). Dec 16 12:27:21.704813 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:27:21.704859 kernel: audit: type=1130 audit(1765888041.702:818): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.45:22-10.0.0.1:46500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:21.785000 audit[5345]: USER_ACCT pid=5345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:21.786968 sshd[5345]: Accepted publickey for core from 10.0.0.1 port 46500 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:27:21.788929 sshd-session[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:21.787000 audit[5345]: CRED_ACQ pid=5345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:21.792858 kernel: audit: type=1101 audit(1765888041.785:819): pid=5345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:21.792960 kernel: audit: type=1103 audit(1765888041.787:820): pid=5345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:21.792981 kernel: audit: type=1006 audit(1765888041.787:821): pid=5345 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 12:27:21.794665 kernel: audit: type=1300 audit(1765888041.787:821): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd7f8f0e0 a2=3 a3=0 items=0 ppid=1 pid=5345 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:21.787000 audit[5345]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 
a1=ffffd7f8f0e0 a2=3 a3=0 items=0 ppid=1 pid=5345 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:21.796238 systemd-logind[1553]: New session 16 of user core. Dec 16 12:27:21.787000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:21.799751 kernel: audit: type=1327 audit(1765888041.787:821): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:21.807816 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:27:21.809000 audit[5345]: USER_START pid=5345 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:21.813000 audit[5348]: CRED_ACQ pid=5348 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:21.819363 kernel: audit: type=1105 audit(1765888041.809:822): pid=5345 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:21.819477 kernel: audit: type=1103 audit(1765888041.813:823): pid=5348 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:21.911707 sshd[5348]: Connection closed by 10.0.0.1 port 46500 Dec 16 12:27:21.912385 sshd-session[5345]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:21.912000 audit[5345]: USER_END pid=5345 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:21.920770 kernel: audit: type=1106 audit(1765888041.912:824): pid=5345 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:21.920911 kernel: audit: type=1104 audit(1765888041.912:825): pid=5345 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:21.912000 audit[5345]: CRED_DISP pid=5345 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:21.928013 systemd[1]: sshd@15-10.0.0.45:22-10.0.0.1:46500.service: Deactivated successfully. 
Dec 16 12:27:21.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.45:22-10.0.0.1:46500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:21.930204 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:27:21.931436 systemd-logind[1553]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:27:21.935928 systemd[1]: Started sshd@16-10.0.0.45:22-10.0.0.1:46508.service - OpenSSH per-connection server daemon (10.0.0.1:46508). Dec 16 12:27:21.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.45:22-10.0.0.1:46508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:21.936918 systemd-logind[1553]: Removed session 16. Dec 16 12:27:22.002000 audit[5363]: USER_ACCT pid=5363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:22.004215 sshd[5363]: Accepted publickey for core from 10.0.0.1 port 46508 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:27:22.003000 audit[5363]: CRED_ACQ pid=5363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:22.004000 audit[5363]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffb85b0f0 a2=3 a3=0 items=0 ppid=1 pid=5363 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:22.004000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:22.005718 sshd-session[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:22.010145 systemd-logind[1553]: New session 17 of user core. Dec 16 12:27:22.020574 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 12:27:22.021000 audit[5363]: USER_START pid=5363 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:22.023000 audit[5366]: CRED_ACQ pid=5366 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:22.253406 sshd[5366]: Connection closed by 10.0.0.1 port 46508 Dec 16 12:27:22.253991 sshd-session[5363]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:22.255000 audit[5363]: USER_END pid=5363 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:22.255000 audit[5363]: CRED_DISP pid=5363 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:22.266025 systemd[1]: sshd@16-10.0.0.45:22-10.0.0.1:46508.service: Deactivated successfully. Dec 16 12:27:22.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.45:22-10.0.0.1:46508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:22.269152 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:27:22.270040 systemd-logind[1553]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:27:22.273521 systemd[1]: Started sshd@17-10.0.0.45:22-10.0.0.1:46518.service - OpenSSH per-connection server daemon (10.0.0.1:46518). Dec 16 12:27:22.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.45:22-10.0.0.1:46518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:22.274315 systemd-logind[1553]: Removed session 17. 
Dec 16 12:27:22.342000 audit[5378]: USER_ACCT pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:22.343547 sshd[5378]: Accepted publickey for core from 10.0.0.1 port 46518 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:27:22.344000 audit[5378]: CRED_ACQ pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:22.344000 audit[5378]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe1254900 a2=3 a3=0 items=0 ppid=1 pid=5378 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:22.344000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:22.344779 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:22.350226 systemd-logind[1553]: New session 18 of user core. Dec 16 12:27:22.359576 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 12:27:22.362000 audit[5378]: USER_START pid=5378 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:22.365000 audit[5381]: CRED_ACQ pid=5381 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:22.997000 audit[5396]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=5396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:27:22.997000 audit[5396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffec94f930 a2=0 a3=1 items=0 ppid=2851 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:22.997000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:27:23.003334 sshd[5381]: Connection closed by 10.0.0.1 port 46518 Dec 16 12:27:23.003855 sshd-session[5378]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:23.004000 audit[5378]: USER_END pid=5378 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:23.004000 audit[5378]: CRED_DISP pid=5378 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:23.005000 
audit[5396]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:27:23.005000 audit[5396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffec94f930 a2=0 a3=1 items=0 ppid=2851 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:23.005000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:27:23.012728 systemd[1]: sshd@17-10.0.0.45:22-10.0.0.1:46518.service: Deactivated successfully. Dec 16 12:27:23.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.45:22-10.0.0.1:46518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:23.016179 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:27:23.017396 systemd-logind[1553]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:27:23.021192 systemd[1]: Started sshd@18-10.0.0.45:22-10.0.0.1:46522.service - OpenSSH per-connection server daemon (10.0.0.1:46522). Dec 16 12:27:23.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.45:22-10.0.0.1:46522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:23.023656 systemd-logind[1553]: Removed session 18. Dec 16 12:27:23.032961 kubelet[2731]: E1216 12:27:23.032455 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cbqqb" podUID="3e707694-c3c3-46cb-9c34-819568db9981" Dec 16 12:27:23.085000 audit[5401]: USER_ACCT pid=5401 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:23.086452 sshd[5401]: Accepted publickey for core from 10.0.0.1 port 46522 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:27:23.087000 audit[5401]: CRED_ACQ pid=5401 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:23.087000 audit[5401]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffff7c8520 a2=3 a3=0 items=0 ppid=1 pid=5401 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:23.087000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:23.088191 sshd-session[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:23.095374 systemd-logind[1553]: New 
session 19 of user core. Dec 16 12:27:23.105608 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:27:23.108000 audit[5401]: USER_START pid=5401 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:23.110000 audit[5404]: CRED_ACQ pid=5404 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:23.366389 sshd[5404]: Connection closed by 10.0.0.1 port 46522 Dec 16 12:27:23.368710 sshd-session[5401]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:23.370000 audit[5401]: USER_END pid=5401 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:23.370000 audit[5401]: CRED_DISP pid=5401 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:23.379444 systemd[1]: sshd@18-10.0.0.45:22-10.0.0.1:46522.service: Deactivated successfully. Dec 16 12:27:23.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.45:22-10.0.0.1:46522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:23.382659 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:27:23.384382 systemd-logind[1553]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:27:23.386929 systemd-logind[1553]: Removed session 19. Dec 16 12:27:23.389644 systemd[1]: Started sshd@19-10.0.0.45:22-10.0.0.1:46530.service - OpenSSH per-connection server daemon (10.0.0.1:46530). Dec 16 12:27:23.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.45:22-10.0.0.1:46530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:27:23.470000 audit[5416]: USER_ACCT pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:23.471283 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 46530 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:27:23.472000 audit[5416]: CRED_ACQ pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:23.472000 audit[5416]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcccd2120 a2=3 a3=0 items=0 ppid=1 pid=5416 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:23.472000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:23.472804 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:23.478420 systemd-logind[1553]: New session 20 of user core. Dec 16 12:27:23.483604 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:27:23.486000 audit[5416]: USER_START pid=5416 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:23.487000 audit[5419]: CRED_ACQ pid=5419 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:23.583375 sshd[5419]: Connection closed by 10.0.0.1 port 46530 Dec 16 12:27:23.583743 sshd-session[5416]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:23.585000 audit[5416]: USER_END pid=5416 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:23.585000 audit[5416]: CRED_DISP pid=5416 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:23.588624 systemd[1]: sshd@19-10.0.0.45:22-10.0.0.1:46530.service: Deactivated successfully. Dec 16 12:27:23.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.45:22-10.0.0.1:46530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:23.592302 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:27:23.593084 systemd-logind[1553]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:27:23.594106 systemd-logind[1553]: Removed session 20. 
Dec 16 12:27:24.027000 audit[5433]: NETFILTER_CFG table=filter:141 family=2 entries=38 op=nft_register_rule pid=5433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:27:24.027000 audit[5433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffec28a330 a2=0 a3=1 items=0 ppid=2851 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:24.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:27:24.039000 audit[5433]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:27:24.039000 audit[5433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffec28a330 a2=0 a3=1 items=0 ppid=2851 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:24.039000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:27:24.684556 kubelet[2731]: E1216 12:27:24.684517 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:27:25.030613 kubelet[2731]: E1216 12:27:25.030458 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-7nnp6" podUID="c0a4e673-b12d-498c-8d03-d2574bb6b967" Dec 16 12:27:28.030102 kubelet[2731]: E1216 12:27:28.030023 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-xt2gx" podUID="d237c895-0922-4d34-99eb-c1d5a8780e41" Dec 16 12:27:28.032327 kubelet[2731]: E1216 12:27:28.032272 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dw5m" podUID="8ba61881-a1a2-472c-ad8a-7b1172620126" Dec 16 12:27:28.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.45:22-10.0.0.1:46542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:28.604069 systemd[1]: Started sshd@20-10.0.0.45:22-10.0.0.1:46542.service - OpenSSH per-connection server daemon (10.0.0.1:46542). Dec 16 12:27:28.607267 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 12:27:28.607337 kernel: audit: type=1130 audit(1765888048.603:867): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.45:22-10.0.0.1:46542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:28.676000 audit[5467]: USER_ACCT pid=5467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:28.676969 sshd[5467]: Accepted publickey for core from 10.0.0.1 port 46542 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:27:28.678946 sshd-session[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:28.678000 audit[5467]: CRED_ACQ pid=5467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:28.683487 kernel: audit: type=1101 audit(1765888048.676:868): pid=5467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:28.683584 kernel: audit: type=1103 audit(1765888048.678:869): pid=5467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:28.685954 kernel: audit: type=1006 audit(1765888048.678:870): pid=5467 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 16 12:27:28.678000 audit[5467]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe621c6e0 a2=3 a3=0 items=0 ppid=1 pid=5467 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:28.689373 systemd-logind[1553]: New session 21 of user core. 
Dec 16 12:27:28.691796 kernel: audit: type=1300 audit(1765888048.678:870): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe621c6e0 a2=3 a3=0 items=0 ppid=1 pid=5467 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:28.691839 kernel: audit: type=1327 audit(1765888048.678:870): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:28.678000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:28.704583 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:27:28.706000 audit[5467]: USER_START pid=5467 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:28.710000 audit[5470]: CRED_ACQ pid=5470 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:28.714274 kernel: audit: type=1105 audit(1765888048.706:871): pid=5467 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:28.714416 kernel: audit: type=1103 audit(1765888048.710:872): pid=5470 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:28.815983 sshd[5470]: Connection closed by 10.0.0.1 port 46542 Dec 16 12:27:28.816865 sshd-session[5467]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:28.817000 audit[5467]: USER_END pid=5467 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:28.820578 systemd[1]: sshd@20-10.0.0.45:22-10.0.0.1:46542.service: Deactivated successfully. Dec 16 12:27:28.817000 audit[5467]: CRED_DISP pid=5467 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:28.823210 systemd[1]: session-21.scope: Deactivated successfully. 
Dec 16 12:27:28.825160 kernel: audit: type=1106 audit(1765888048.817:873): pid=5467 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:28.825411 kernel: audit: type=1104 audit(1765888048.817:874): pid=5467 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:28.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.45:22-10.0.0.1:46542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:28.825395 systemd-logind[1553]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:27:28.827409 systemd-logind[1553]: Removed session 21. Dec 16 12:27:29.034621 kubelet[2731]: E1216 12:27:29.033170 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7bcbfb5-5mqzg" podUID="e774688f-af9f-49ba-93f6-0e9b13337ee0" Dec 16 12:27:30.029686 kubelet[2731]: E1216 12:27:30.028928 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:27:30.030881 kubelet[2731]: E1216 12:27:30.030664 2731 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:27:30.649000 audit[5485]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=5485 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:27:30.649000 audit[5485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd5f7b990 a2=0 a3=1 items=0 ppid=2851 pid=5485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:30.649000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:27:30.662000 audit[5485]: NETFILTER_CFG table=nat:144 family=2 entries=104 op=nft_register_chain pid=5485 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:27:30.662000 audit[5485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffd5f7b990 a2=0 a3=1 items=0 ppid=2851 pid=5485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:30.662000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:27:31.030144 kubelet[2731]: E1216 12:27:31.029899 2731 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:27:31.032132 containerd[1582]: time="2025-12-16T12:27:31.032064284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:27:31.250477 containerd[1582]: time="2025-12-16T12:27:31.250413137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:31.252168 containerd[1582]: time="2025-12-16T12:27:31.252122397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:27:31.252256 containerd[1582]: time="2025-12-16T12:27:31.252221961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:31.252809 kubelet[2731]: E1216 12:27:31.252438 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:27:31.252809 kubelet[2731]: E1216 12:27:31.252500 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:27:31.252809 kubelet[2731]: E1216 12:27:31.252589 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-585749bd7b-mzpmc_calico-system(9544238e-7c32-4f71-bf15-98eec7d18a91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:31.253900 containerd[1582]: time="2025-12-16T12:27:31.253876019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:27:31.470698 containerd[1582]: time="2025-12-16T12:27:31.470597735Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:31.473042 containerd[1582]: time="2025-12-16T12:27:31.472913856Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:27:31.473042 containerd[1582]: time="2025-12-16T12:27:31.472971938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:31.473546 kubelet[2731]: E1216 12:27:31.473313 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:27:31.473546 kubelet[2731]: E1216 12:27:31.473385 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:27:31.473546 kubelet[2731]: E1216 12:27:31.473467 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-585749bd7b-mzpmc_calico-system(9544238e-7c32-4f71-bf15-98eec7d18a91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:31.473546 kubelet[2731]: E1216 12:27:31.473507 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585749bd7b-mzpmc" podUID="9544238e-7c32-4f71-bf15-98eec7d18a91" Dec 16 12:27:33.031366 containerd[1582]: time="2025-12-16T12:27:33.031302661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:27:33.255377 containerd[1582]: time="2025-12-16T12:27:33.255306125Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:33.256811 containerd[1582]: time="2025-12-16T12:27:33.256766734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:27:33.256887 containerd[1582]: time="2025-12-16T12:27:33.256805975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:33.257066 kubelet[2731]: E1216 12:27:33.257025 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:27:33.257402 kubelet[2731]: E1216 12:27:33.257093 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:27:33.257402 kubelet[2731]: E1216 12:27:33.257188 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-686cd7448d-zpgvc_calico-system(cb75681b-286c-4a6b-a7e3-6df9c7f59d30): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:33.257402 kubelet[2731]: E1216 12:27:33.257222 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-686cd7448d-zpgvc" podUID="cb75681b-286c-4a6b-a7e3-6df9c7f59d30" Dec 16 12:27:33.833291 systemd[1]: Started sshd@21-10.0.0.45:22-10.0.0.1:56730.service - OpenSSH per-connection server daemon (10.0.0.1:56730). Dec 16 12:27:33.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.45:22-10.0.0.1:56730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:33.836794 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:27:33.836908 kernel: audit: type=1130 audit(1765888053.832:878): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.45:22-10.0.0.1:56730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:33.899000 audit[5487]: USER_ACCT pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:33.900584 sshd[5487]: Accepted publickey for core from 10.0.0.1 port 56730 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:27:33.902000 audit[5487]: CRED_ACQ pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:33.904212 sshd-session[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:33.906948 kernel: audit: type=1101 audit(1765888053.899:879): pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:33.907013 kernel: audit: type=1103 audit(1765888053.902:880): pid=5487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:33.909196 kernel: audit: type=1006 audit(1765888053.902:881): pid=5487 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 12:27:33.909283 kernel: audit: type=1300 audit(1765888053.902:881): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffccc7d0f0 a2=3 a3=0 items=0 ppid=1 pid=5487 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:33.902000 audit[5487]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 
a1=ffffccc7d0f0 a2=3 a3=0 items=0 ppid=1 pid=5487 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:33.912511 kernel: audit: type=1327 audit(1765888053.902:881): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:33.902000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:33.911827 systemd-logind[1553]: New session 22 of user core. Dec 16 12:27:33.923626 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:27:33.924000 audit[5487]: USER_START pid=5487 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:33.928000 audit[5490]: CRED_ACQ pid=5490 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:33.933164 kernel: audit: type=1105 audit(1765888053.924:882): pid=5487 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:33.933275 kernel: audit: type=1103 audit(1765888053.928:883): pid=5490 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:34.060358 sshd[5490]: Connection closed by 10.0.0.1 port 56730 Dec 16 12:27:34.060682 sshd-session[5487]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:34.063000 audit[5487]: USER_END pid=5487 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:34.068815 systemd[1]: sshd@21-10.0.0.45:22-10.0.0.1:56730.service: Deactivated successfully. 
Dec 16 12:27:34.063000 audit[5487]: CRED_DISP pid=5487 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:34.072672 kernel: audit: type=1106 audit(1765888054.063:884): pid=5487 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:34.072765 kernel: audit: type=1104 audit(1765888054.063:885): pid=5487 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:34.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.45:22-10.0.0.1:56730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:34.073765 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:27:34.076078 systemd-logind[1553]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:27:34.077200 systemd-logind[1553]: Removed session 22. Dec 16 12:27:36.032108 containerd[1582]: time="2025-12-16T12:27:36.032061684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:27:36.253534 containerd[1582]: time="2025-12-16T12:27:36.253458837Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:36.254578 containerd[1582]: time="2025-12-16T12:27:36.254532831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:27:36.257414 containerd[1582]: time="2025-12-16T12:27:36.254609673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:36.257539 kubelet[2731]: E1216 12:27:36.254789 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:27:36.257539 kubelet[2731]: E1216 12:27:36.254835 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:27:36.257539 kubelet[2731]: E1216 12:27:36.254902 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-cbqqb_calico-system(3e707694-c3c3-46cb-9c34-819568db9981): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:36.257539 kubelet[2731]: E1216 12:27:36.254934 2731 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cbqqb" podUID="3e707694-c3c3-46cb-9c34-819568db9981" Dec 16 12:27:39.031187 containerd[1582]: time="2025-12-16T12:27:39.031118269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:27:39.075222 systemd[1]: Started sshd@22-10.0.0.45:22-10.0.0.1:56734.service - OpenSSH per-connection server daemon (10.0.0.1:56734). Dec 16 12:27:39.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.45:22-10.0.0.1:56734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:39.076341 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:27:39.076390 kernel: audit: type=1130 audit(1765888059.074:887): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.45:22-10.0.0.1:56734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:39.138000 audit[5509]: USER_ACCT pid=5509 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:39.141448 sshd[5509]: Accepted publickey for core from 10.0.0.1 port 56734 ssh2: RSA SHA256:/9/2GUFTAM1LEKsLoZJAJSZa/nSu8odb5SsTJ4rriDM Dec 16 12:27:39.144343 kernel: audit: type=1101 audit(1765888059.138:888): pid=5509 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:39.143000 audit[5509]: CRED_ACQ pid=5509 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:39.145068 sshd-session[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:39.149360 kernel: audit: type=1103 audit(1765888059.143:889): pid=5509 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:39.149448 kernel: audit: type=1006 audit(1765888059.143:890): pid=5509 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 12:27:39.149469 kernel: audit: type=1300 audit(1765888059.143:890): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff9a2c850 a2=3 a3=0 items=0 ppid=1 pid=5509 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:39.143000 audit[5509]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff9a2c850 a2=3 a3=0 items=0 ppid=1 pid=5509 auid=500 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:27:39.143000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:39.157029 kernel: audit: type=1327 audit(1765888059.143:890): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:27:39.160894 systemd-logind[1553]: New session 23 of user core. Dec 16 12:27:39.171669 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 12:27:39.174000 audit[5509]: USER_START pid=5509 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:39.177000 audit[5512]: CRED_ACQ pid=5512 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:39.182281 kernel: audit: type=1105 audit(1765888059.174:891): pid=5509 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:39.182400 kernel: audit: type=1103 audit(1765888059.177:892): pid=5512 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:39.245934 containerd[1582]: time="2025-12-16T12:27:39.245874157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:27:39.247174 containerd[1582]: time="2025-12-16T12:27:39.247111674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:27:39.247293 containerd[1582]: time="2025-12-16T12:27:39.247170356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:27:39.247606 kubelet[2731]: E1216 12:27:39.247537 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:27:39.247606 kubelet[2731]: E1216 12:27:39.247588 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:27:39.248272 kubelet[2731]: E1216 12:27:39.248063 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-86f4fcfc8d-7nnp6_calico-apiserver(c0a4e673-b12d-498c-8d03-d2574bb6b967): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:27:39.248272 kubelet[2731]: E1216 12:27:39.248121 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f4fcfc8d-7nnp6" podUID="c0a4e673-b12d-498c-8d03-d2574bb6b967" Dec 16 12:27:39.328050 sshd[5512]: Connection closed by 10.0.0.1 port 56734 Dec 16 12:27:39.326782 sshd-session[5509]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:39.329000 audit[5509]: USER_END pid=5509 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:39.333745 systemd[1]: sshd@22-10.0.0.45:22-10.0.0.1:56734.service: Deactivated successfully. Dec 16 12:27:39.329000 audit[5509]: CRED_DISP pid=5509 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:39.337164 kernel: audit: type=1106 audit(1765888059.329:893): pid=5509 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:39.337271 kernel: audit: type=1104 audit(1765888059.329:894): pid=5509 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:27:39.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.45:22-10.0.0.1:56734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:27:39.339769 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:27:39.342472 systemd-logind[1553]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:27:39.343841 systemd-logind[1553]: Removed session 23. 
Dec 16 12:27:40.029711 containerd[1582]: time="2025-12-16T12:27:40.029553137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 12:27:40.269563 containerd[1582]: time="2025-12-16T12:27:40.269486896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:27:40.281308 containerd[1582]: time="2025-12-16T12:27:40.281155882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 12:27:40.281308 containerd[1582]: time="2025-12-16T12:27:40.281267445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:27:40.281510 kubelet[2731]: E1216 12:27:40.281469 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:27:40.282226 kubelet[2731]: E1216 12:27:40.281516 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:27:40.282226 kubelet[2731]: E1216 12:27:40.281587 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8dw5m_calico-system(8ba61881-a1a2-472c-ad8a-7b1172620126): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:27:40.282856 containerd[1582]: time="2025-12-16T12:27:40.282808051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 12:27:40.508786 containerd[1582]: time="2025-12-16T12:27:40.508702273Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:27:40.509946 containerd[1582]: time="2025-12-16T12:27:40.509877148Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 12:27:40.510003 containerd[1582]: time="2025-12-16T12:27:40.509946630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:27:40.510195 kubelet[2731]: E1216 12:27:40.510138 2731 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:27:40.510259 kubelet[2731]: E1216 12:27:40.510191 2731 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:27:40.510304 kubelet[2731]: E1216 12:27:40.510269 2731 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8dw5m_calico-system(8ba61881-a1a2-472c-ad8a-7b1172620126): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:27:40.510419 kubelet[2731]: E1216 12:27:40.510369 2731 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dw5m" podUID="8ba61881-a1a2-472c-ad8a-7b1172620126"