Jan 14 05:43:20.865146 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 14 03:30:44 -00 2026
Jan 14 05:43:20.865181 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e02bed36f442f7915376555bbec9abc9601b29a9acaf045382608b676e1943
Jan 14 05:43:20.865196 kernel: BIOS-provided physical RAM map:
Jan 14 05:43:20.865204 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 14 05:43:20.865215 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 14 05:43:20.865225 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 14 05:43:20.865237 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 14 05:43:20.865249 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 14 05:43:20.865255 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 14 05:43:20.865261 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 14 05:43:20.865270 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 05:43:20.865276 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 14 05:43:20.865282 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 14 05:43:20.865288 kernel: NX (Execute Disable) protection: active
Jan 14 05:43:20.865296 kernel: APIC: Static calls initialized
Jan 14 05:43:20.865304 kernel: SMBIOS 2.8 present.
Jan 14 05:43:20.865311 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 14 05:43:20.865317 kernel: DMI: Memory slots populated: 1/1
Jan 14 05:43:20.865323 kernel: Hypervisor detected: KVM
Jan 14 05:43:20.865330 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 14 05:43:20.865336 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 14 05:43:20.865342 kernel: kvm-clock: using sched offset of 5587196953 cycles
Jan 14 05:43:20.865349 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 14 05:43:20.865356 kernel: tsc: Detected 2445.426 MHz processor
Jan 14 05:43:20.865363 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 05:43:20.865372 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 05:43:20.865379 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 14 05:43:20.865385 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 14 05:43:20.865392 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 05:43:20.865399 kernel: Using GB pages for direct mapping
Jan 14 05:43:20.865406 kernel: ACPI: Early table checksum verification disabled
Jan 14 05:43:20.865413 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 14 05:43:20.865421 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 05:43:20.865428 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 05:43:20.865435 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 05:43:20.865441 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 14 05:43:20.865448 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 05:43:20.865455 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 05:43:20.865462 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 05:43:20.865471 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 05:43:20.865481 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 14 05:43:20.865488 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 14 05:43:20.865495 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 14 05:43:20.865502 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 14 05:43:20.865642 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 14 05:43:20.865654 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 14 05:43:20.865661 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 14 05:43:20.865668 kernel: No NUMA configuration found
Jan 14 05:43:20.865676 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 14 05:43:20.865683 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 14 05:43:20.865694 kernel: Zone ranges:
Jan 14 05:43:20.865701 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 05:43:20.865708 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 14 05:43:20.865715 kernel: Normal empty
Jan 14 05:43:20.865722 kernel: Device empty
Jan 14 05:43:20.865729 kernel: Movable zone start for each node
Jan 14 05:43:20.865735 kernel: Early memory node ranges
Jan 14 05:43:20.865742 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 14 05:43:20.865752 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 14 05:43:20.865759 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 14 05:43:20.865835 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 05:43:20.865851 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 14 05:43:20.865864 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 14 05:43:20.865871 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 14 05:43:20.865878 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 14 05:43:20.865885 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 14 05:43:20.865896 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 14 05:43:20.865903 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 14 05:43:20.865910 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 05:43:20.865917 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 14 05:43:20.865924 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 14 05:43:20.865931 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 05:43:20.865938 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 14 05:43:20.865947 kernel: TSC deadline timer available
Jan 14 05:43:20.865954 kernel: CPU topo: Max. logical packages: 1
Jan 14 05:43:20.865961 kernel: CPU topo: Max. logical dies: 1
Jan 14 05:43:20.865968 kernel: CPU topo: Max. dies per package: 1
Jan 14 05:43:20.865974 kernel: CPU topo: Max. threads per core: 1
Jan 14 05:43:20.865981 kernel: CPU topo: Num. cores per package: 4
Jan 14 05:43:20.865988 kernel: CPU topo: Num. threads per package: 4
Jan 14 05:43:20.865995 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 14 05:43:20.866004 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 14 05:43:20.866011 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 14 05:43:20.866018 kernel: kvm-guest: setup PV sched yield
Jan 14 05:43:20.866025 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 14 05:43:20.866032 kernel: Booting paravirtualized kernel on KVM
Jan 14 05:43:20.866039 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 05:43:20.866046 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 14 05:43:20.866055 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 14 05:43:20.866062 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 14 05:43:20.866069 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 14 05:43:20.866076 kernel: kvm-guest: PV spinlocks enabled
Jan 14 05:43:20.866083 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 05:43:20.866091 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e02bed36f442f7915376555bbec9abc9601b29a9acaf045382608b676e1943
Jan 14 05:43:20.866098 kernel: random: crng init done
Jan 14 05:43:20.866107 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 05:43:20.866114 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 05:43:20.866121 kernel: Fallback order for Node 0: 0
Jan 14 05:43:20.866128 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 14 05:43:20.866135 kernel: Policy zone: DMA32
Jan 14 05:43:20.866142 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 05:43:20.866149 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 14 05:43:20.866158 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 14 05:43:20.866165 kernel: ftrace: allocated 157 pages with 5 groups
Jan 14 05:43:20.866172 kernel: Dynamic Preempt: voluntary
Jan 14 05:43:20.866179 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 05:43:20.866190 kernel: rcu: RCU event tracing is enabled.
Jan 14 05:43:20.866197 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 14 05:43:20.866204 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 05:43:20.866213 kernel: Rude variant of Tasks RCU enabled.
Jan 14 05:43:20.866220 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 05:43:20.866228 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 05:43:20.866235 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 14 05:43:20.866242 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 05:43:20.866249 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 05:43:20.866256 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 05:43:20.866263 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 14 05:43:20.866272 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 05:43:20.866286 kernel: Console: colour VGA+ 80x25
Jan 14 05:43:20.866295 kernel: printk: legacy console [ttyS0] enabled
Jan 14 05:43:20.866302 kernel: ACPI: Core revision 20240827
Jan 14 05:43:20.866309 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 14 05:43:20.866316 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 05:43:20.866324 kernel: x2apic enabled
Jan 14 05:43:20.866331 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 14 05:43:20.866338 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 14 05:43:20.866348 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 14 05:43:20.866355 kernel: kvm-guest: setup PV IPIs
Jan 14 05:43:20.866363 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 14 05:43:20.866370 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 05:43:20.866380 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 14 05:43:20.866387 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 14 05:43:20.866394 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 14 05:43:20.866402 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 14 05:43:20.866409 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 05:43:20.866416 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 05:43:20.866424 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 14 05:43:20.866433 kernel: Speculative Store Bypass: Vulnerable
Jan 14 05:43:20.866440 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 14 05:43:20.866448 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 14 05:43:20.866455 kernel: active return thunk: srso_alias_return_thunk
Jan 14 05:43:20.866462 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 14 05:43:20.866470 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 14 05:43:20.866479 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 05:43:20.866486 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 05:43:20.866494 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 05:43:20.866501 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 05:43:20.866508 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 05:43:20.866632 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 14 05:43:20.866639 kernel: Freeing SMP alternatives memory: 32K
Jan 14 05:43:20.866650 kernel: pid_max: default: 32768 minimum: 301
Jan 14 05:43:20.866657 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 05:43:20.866664 kernel: landlock: Up and running.
Jan 14 05:43:20.866672 kernel: SELinux: Initializing.
Jan 14 05:43:20.866679 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 05:43:20.866686 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 05:43:20.866694 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 14 05:43:20.866703 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 14 05:43:20.866711 kernel: signal: max sigframe size: 1776
Jan 14 05:43:20.866718 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 05:43:20.866725 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 05:43:20.866733 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 14 05:43:20.866740 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 05:43:20.866747 kernel: smp: Bringing up secondary CPUs ...
Jan 14 05:43:20.866755 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 05:43:20.866764 kernel: .... node #0, CPUs: #1 #2 #3
Jan 14 05:43:20.866832 kernel: smp: Brought up 1 node, 4 CPUs
Jan 14 05:43:20.866840 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 14 05:43:20.866848 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120520K reserved, 0K cma-reserved)
Jan 14 05:43:20.866855 kernel: devtmpfs: initialized
Jan 14 05:43:20.866862 kernel: x86/mm: Memory block size: 128MB
Jan 14 05:43:20.866870 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 05:43:20.866880 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 14 05:43:20.866887 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 05:43:20.866895 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 05:43:20.866902 kernel: audit: initializing netlink subsys (disabled)
Jan 14 05:43:20.866909 kernel: audit: type=2000 audit(1768369393.585:1): state=initialized audit_enabled=0 res=1
Jan 14 05:43:20.866917 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 05:43:20.866924 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 05:43:20.866934 kernel: cpuidle: using governor menu
Jan 14 05:43:20.866941 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 05:43:20.866948 kernel: dca service started, version 1.12.1
Jan 14 05:43:20.866955 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 14 05:43:20.866963 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 14 05:43:20.866970 kernel: PCI: Using configuration type 1 for base access
Jan 14 05:43:20.866978 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 05:43:20.866987 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 05:43:20.866994 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 05:43:20.867002 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 05:43:20.867009 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 05:43:20.867016 kernel: ACPI: Added _OSI(Module Device)
Jan 14 05:43:20.867023 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 05:43:20.867030 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 05:43:20.867040 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 05:43:20.867047 kernel: ACPI: Interpreter enabled
Jan 14 05:43:20.867054 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 14 05:43:20.867061 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 05:43:20.867069 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 05:43:20.867076 kernel: PCI: Using E820 reservations for host bridge windows
Jan 14 05:43:20.867083 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 14 05:43:20.867093 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 05:43:20.867337 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 05:43:20.867636 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 14 05:43:20.867890 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 14 05:43:20.867902 kernel: PCI host bridge to bus 0000:00
Jan 14 05:43:20.868074 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 14 05:43:20.868237 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 14 05:43:20.868393 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 14 05:43:20.868765 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 14 05:43:20.868998 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 14 05:43:20.869154 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 14 05:43:20.869315 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 05:43:20.869502 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 14 05:43:20.869958 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 14 05:43:20.870137 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 14 05:43:20.870306 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 14 05:43:20.870474 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 14 05:43:20.870863 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 14 05:43:20.871093 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 11718 usecs
Jan 14 05:43:20.871270 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 14 05:43:20.871437 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 14 05:43:20.871832 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 14 05:43:20.872017 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 14 05:43:20.872194 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 14 05:43:20.872361 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 14 05:43:20.872655 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 14 05:43:20.872910 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 14 05:43:20.873091 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 14 05:43:20.873272 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 14 05:43:20.873458 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 14 05:43:20.873956 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 14 05:43:20.874219 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 14 05:43:20.874461 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 14 05:43:20.874856 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 14 05:43:20.875033 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 13671 usecs
Jan 14 05:43:20.875210 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 14 05:43:20.875376 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 14 05:43:20.875702 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 14 05:43:20.875953 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 14 05:43:20.876129 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 14 05:43:20.876139 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 14 05:43:20.876147 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 14 05:43:20.876154 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 14 05:43:20.876162 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 14 05:43:20.876169 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 14 05:43:20.876177 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 14 05:43:20.876188 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 14 05:43:20.876195 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 14 05:43:20.876203 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 14 05:43:20.876211 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 14 05:43:20.876218 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 14 05:43:20.876226 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 14 05:43:20.876233 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 14 05:43:20.876243 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 14 05:43:20.876250 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 14 05:43:20.876257 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 14 05:43:20.876265 kernel: iommu: Default domain type: Translated
Jan 14 05:43:20.876272 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 05:43:20.876280 kernel: PCI: Using ACPI for IRQ routing
Jan 14 05:43:20.876287 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 14 05:43:20.876297 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 14 05:43:20.876304 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 14 05:43:20.876469 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 14 05:43:20.876862 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 14 05:43:20.877032 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 14 05:43:20.877042 kernel: vgaarb: loaded
Jan 14 05:43:20.877050 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 14 05:43:20.877062 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 14 05:43:20.877070 kernel: clocksource: Switched to clocksource kvm-clock
Jan 14 05:43:20.877077 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 05:43:20.877085 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 05:43:20.877093 kernel: pnp: PnP ACPI init
Jan 14 05:43:20.877271 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 14 05:43:20.877286 kernel: pnp: PnP ACPI: found 6 devices
Jan 14 05:43:20.877297 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 05:43:20.877305 kernel: NET: Registered PF_INET protocol family
Jan 14 05:43:20.877312 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 05:43:20.877320 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 14 05:43:20.877327 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 05:43:20.877335 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 05:43:20.877344 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 14 05:43:20.877352 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 14 05:43:20.877360 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 05:43:20.877367 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 05:43:20.877375 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 05:43:20.877382 kernel: NET: Registered PF_XDP protocol family
Jan 14 05:43:20.877664 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 14 05:43:20.877906 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 14 05:43:20.878063 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 14 05:43:20.878216 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 14 05:43:20.878370 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 14 05:43:20.878643 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 14 05:43:20.878657 kernel: PCI: CLS 0 bytes, default 64
Jan 14 05:43:20.878665 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 05:43:20.878677 kernel: Initialise system trusted keyrings
Jan 14 05:43:20.878684 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 14 05:43:20.878692 kernel: Key type asymmetric registered
Jan 14 05:43:20.878699 kernel: Asymmetric key parser 'x509' registered
Jan 14 05:43:20.878707 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 14 05:43:20.878714 kernel: io scheduler mq-deadline registered
Jan 14 05:43:20.878722 kernel: io scheduler kyber registered
Jan 14 05:43:20.878731 kernel: io scheduler bfq registered
Jan 14 05:43:20.878739 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 05:43:20.878747 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 14 05:43:20.878755 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 14 05:43:20.878762 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 14 05:43:20.878839 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 05:43:20.878847 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 05:43:20.878855 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 14 05:43:20.878865 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 14 05:43:20.878872 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 14 05:43:20.879059 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 14 05:43:20.879310 kernel: rtc_cmos 00:04: registered as rtc0
Jan 14 05:43:20.879633 kernel: rtc_cmos 00:04: setting system clock to 2026-01-14T05:43:17 UTC (1768369397)
Jan 14 05:43:20.879650 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 14 05:43:20.879902 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 14 05:43:20.879914 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 14 05:43:20.879922 kernel: NET: Registered PF_INET6 protocol family
Jan 14 05:43:20.879930 kernel: Segment Routing with IPv6
Jan 14 05:43:20.879937 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 05:43:20.879945 kernel: NET: Registered PF_PACKET protocol family
Jan 14 05:43:20.879952 kernel: Key type dns_resolver registered
Jan 14 05:43:20.879963 kernel: IPI shorthand broadcast: enabled
Jan 14 05:43:20.879971 kernel: sched_clock: Marking stable (3867053512, 478972315)->(4726573416, -380547589)
Jan 14 05:43:20.879978 kernel: registered taskstats version 1
Jan 14 05:43:20.879986 kernel: Loading compiled-in X.509 certificates
Jan 14 05:43:20.879994 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 447f89388dd1db788444733bd6b00fe574646ee9'
Jan 14 05:43:20.880001 kernel: Demotion targets for Node 0: null
Jan 14 05:43:20.880009 kernel: Key type .fscrypt registered
Jan 14 05:43:20.880018 kernel: Key type fscrypt-provisioning registered
Jan 14 05:43:20.880026 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 05:43:20.880033 kernel: ima: Allocated hash algorithm: sha1
Jan 14 05:43:20.880041 kernel: ima: No architecture policies found
Jan 14 05:43:20.880048 kernel: clk: Disabling unused clocks
Jan 14 05:43:20.880056 kernel: Freeing unused kernel image (initmem) memory: 15536K
Jan 14 05:43:20.880063 kernel: Write protecting the kernel read-only data: 47104k
Jan 14 05:43:20.880073 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K
Jan 14 05:43:20.880080 kernel: Run /init as init process
Jan 14 05:43:20.880088 kernel: with arguments:
Jan 14 05:43:20.880096 kernel: /init
Jan 14 05:43:20.880103 kernel: with environment:
Jan 14 05:43:20.880110 kernel: HOME=/
Jan 14 05:43:20.880118 kernel: TERM=linux
Jan 14 05:43:20.880127 kernel: SCSI subsystem initialized
Jan 14 05:43:20.880134 kernel: libata version 3.00 loaded.
Jan 14 05:43:20.880311 kernel: ahci 0000:00:1f.2: version 3.0
Jan 14 05:43:20.880332 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 14 05:43:20.880665 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 14 05:43:20.880917 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 14 05:43:20.881087 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 14 05:43:20.881346 kernel: scsi host0: ahci
Jan 14 05:43:20.881675 kernel: scsi host1: ahci
Jan 14 05:43:20.881957 kernel: scsi host2: ahci
Jan 14 05:43:20.882173 kernel: scsi host3: ahci
Jan 14 05:43:20.882385 kernel: scsi host4: ahci
Jan 14 05:43:20.882857 kernel: scsi host5: ahci
Jan 14 05:43:20.882874 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 14 05:43:20.882883 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 14 05:43:20.882891 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 14 05:43:20.882899 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 14 05:43:20.882907 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 14 05:43:20.882919 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 14 05:43:20.882927 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 14 05:43:20.882935 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 14 05:43:20.882943 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 14 05:43:20.882951 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 14 05:43:20.882959 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 14 05:43:20.882966 kernel: ata3.00: LPM support broken, forcing max_power
Jan 14 05:43:20.882976 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 14 05:43:20.882984 kernel: ata3.00: applying bridge limits
Jan 14 05:43:20.882992 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 14 05:43:20.882999 kernel: ata3.00: LPM support broken, forcing max_power
Jan 14 05:43:20.883007 kernel: ata3.00: configured for UDMA/100
Jan 14 05:43:20.883218 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 14 05:43:20.883405 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 14 05:43:20.883703 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 14 05:43:20.883716 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 14 05:43:20.883975 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 14 05:43:20.883988 kernel: GPT:16515071 != 27000831
Jan 14 05:43:20.883996 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 14 05:43:20.884004 kernel: GPT:16515071 != 27000831
Jan 14 05:43:20.884015 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 14 05:43:20.884023 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 14 05:43:20.884030 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 05:43:20.884213 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 14 05:43:20.884224 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 05:43:20.884232 kernel: device-mapper: uevent: version 1.0.3
Jan 14 05:43:20.884240 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 14 05:43:20.884250 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 14 05:43:20.884258 kernel: raid6: avx2x4 gen() 24587 MB/s
Jan 14 05:43:20.884266 kernel: raid6: avx2x2 gen() 30429 MB/s
Jan 14 05:43:20.884274 kernel: raid6: avx2x1 gen() 23619 MB/s
Jan 14 05:43:20.884283 kernel: raid6: using algorithm avx2x2 gen() 30429 MB/s
Jan 14 05:43:20.884291 kernel: raid6: .... xor() 17030 MB/s, rmw enabled
Jan 14 05:43:20.884299 kernel: raid6: using avx2x2 recovery algorithm
Jan 14 05:43:20.884309 kernel: xor: automatically using best checksumming function avx
Jan 14 05:43:20.884318 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 05:43:20.884326 kernel: BTRFS: device fsid 2c8f2baf-3f08-4641-b860-b6dd41142f72 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (181)
Jan 14 05:43:20.884334 kernel: BTRFS info (device dm-0): first mount of filesystem 2c8f2baf-3f08-4641-b860-b6dd41142f72
Jan 14 05:43:20.884344 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 05:43:20.884351 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 05:43:20.884359 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 14 05:43:20.884367 kernel: loop: module loaded
Jan 14 05:43:20.884374 kernel: loop0: detected capacity change from 0 to 100536
Jan 14 05:43:20.884382 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 05:43:20.884391 systemd[1]: Successfully made /usr/ read-only.
Jan 14 05:43:20.884403 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 05:43:20.884412 systemd[1]: Detected virtualization kvm.
Jan 14 05:43:20.884420 systemd[1]: Detected architecture x86-64.
Jan 14 05:43:20.884428 systemd[1]: Running in initrd.
Jan 14 05:43:20.884436 systemd[1]: No hostname configured, using default hostname.
Jan 14 05:43:20.884444 systemd[1]: Hostname set to .
Jan 14 05:43:20.884454 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 05:43:20.884462 systemd[1]: Queued start job for default target initrd.target.
Jan 14 05:43:20.884470 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 05:43:20.884478 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 05:43:20.884486 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 05:43:20.884495 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 05:43:20.884503 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 05:43:20.884629 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 05:43:20.884638 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 05:43:20.884647 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 05:43:20.884655 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 05:43:20.884663 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 05:43:20.884674 systemd[1]: Reached target paths.target - Path Units.
Jan 14 05:43:20.884682 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 05:43:20.884690 systemd[1]: Reached target swap.target - Swaps.
Jan 14 05:43:20.884698 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 05:43:20.884706 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 05:43:20.884714 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 05:43:20.884723 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 05:43:20.884733 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 05:43:20.884741 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 14 05:43:20.884749 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 05:43:20.884757 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 05:43:20.884827 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 05:43:20.884837 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 05:43:20.884845 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 05:43:20.884856 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 05:43:20.884864 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 05:43:20.884872 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 05:43:20.884880 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 14 05:43:20.884889 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 05:43:20.884896 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 05:43:20.884906 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 05:43:20.884915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 05:43:20.884949 systemd-journald[319]: Collecting audit messages is enabled.
Jan 14 05:43:20.884971 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 05:43:20.884979 systemd-journald[319]: Journal started
Jan 14 05:43:20.885002 systemd-journald[319]: Runtime Journal (/run/log/journal/b4189faa4da54fe19a660ab523490fe4) is 6M, max 48.2M, 42.1M free.
Jan 14 05:43:20.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:20.904680 kernel: audit: type=1130 audit(1768369400.887:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:20.904728 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 05:43:20.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:20.919478 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 05:43:20.963983 kernel: audit: type=1130 audit(1768369400.917:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:20.964022 kernel: audit: type=1130 audit(1768369400.941:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:20.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:20.942144 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 05:43:20.984037 kernel: audit: type=1130 audit(1768369400.977:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:20.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:20.985333 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 05:43:21.489014 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 05:43:21.489058 kernel: Bridge firewalling registered
Jan 14 05:43:21.023927 systemd-modules-load[322]: Inserted module 'br_netfilter'
Jan 14 05:43:21.483883 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 05:43:21.521702 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 05:43:21.553920 kernel: audit: type=1130 audit(1768369401.521:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:21.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:21.554664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 05:43:21.585707 kernel: audit: type=1130 audit(1768369401.560:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:21.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:21.581339 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 05:43:21.593312 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 05:43:21.639686 systemd-tmpfiles[330]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 14 05:43:21.682000 kernel: audit: type=1130 audit(1768369401.647:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:21.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:21.640759 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 05:43:21.694360 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 05:43:21.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:21.742726 kernel: audit: type=1130 audit(1768369401.720:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:21.753717 kernel: audit: type=1130 audit(1768369401.752:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:21.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:21.742504 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 05:43:21.770409 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 05:43:21.816199 kernel: audit: type=1130 audit(1768369401.787:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:21.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:21.812682 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 05:43:21.847000 audit: BPF prog-id=6 op=LOAD
Jan 14 05:43:21.849695 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 05:43:21.860649 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 05:43:21.894751 dracut-cmdline[351]: dracut-109
Jan 14 05:43:21.894751 dracut-cmdline[351]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e02bed36f442f7915376555bbec9abc9601b29a9acaf045382608b676e1943
Jan 14 05:43:21.956947 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 05:43:21.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:22.010494 systemd-resolved[352]: Positive Trust Anchors:
Jan 14 05:43:22.010698 systemd-resolved[352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 05:43:22.010703 systemd-resolved[352]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 05:43:22.010730 systemd-resolved[352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 05:43:22.095106 systemd-resolved[352]: Defaulting to hostname 'linux'.
Jan 14 05:43:22.100480 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 05:43:22.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:22.109018 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 05:43:22.218692 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 05:43:22.246655 kernel: iscsi: registered transport (tcp)
Jan 14 05:43:22.283778 kernel: iscsi: registered transport (qla4xxx)
Jan 14 05:43:22.283962 kernel: QLogic iSCSI HBA Driver
Jan 14 05:43:22.341724 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 05:43:22.381950 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 05:43:22.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:22.405964 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 05:43:22.492906 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 05:43:22.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:22.501867 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 05:43:22.532896 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 05:43:22.602674 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 05:43:22.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:22.621000 audit: BPF prog-id=7 op=LOAD
Jan 14 05:43:22.622000 audit: BPF prog-id=8 op=LOAD
Jan 14 05:43:22.624148 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 05:43:22.702131 systemd-udevd[582]: Using default interface naming scheme 'v257'.
Jan 14 05:43:22.724031 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 05:43:22.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:22.732038 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 05:43:22.800235 dracut-pre-trigger[629]: rd.md=0: removing MD RAID activation
Jan 14 05:43:22.883292 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 05:43:22.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:22.892272 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 05:43:22.954911 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 05:43:22.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:22.962000 audit: BPF prog-id=9 op=LOAD
Jan 14 05:43:22.964221 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 05:43:23.054388 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 05:43:23.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:23.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:23.060174 systemd-networkd[725]: lo: Link UP
Jan 14 05:43:23.060179 systemd-networkd[725]: lo: Gained carrier
Jan 14 05:43:23.064411 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 05:43:23.080282 systemd[1]: Reached target network.target - Network.
Jan 14 05:43:23.096755 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 05:43:23.183955 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 14 05:43:23.232766 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 14 05:43:23.268205 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 14 05:43:23.304014 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 14 05:43:23.323984 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 05:43:23.325977 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 05:43:23.351668 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 14 05:43:23.362389 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 05:43:23.366954 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 05:43:23.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:23.378618 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 05:43:23.381746 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 05:43:23.413082 disk-uuid[768]: Primary Header is updated.
Jan 14 05:43:23.413082 disk-uuid[768]: Secondary Entries is updated.
Jan 14 05:43:23.413082 disk-uuid[768]: Secondary Header is updated.
Jan 14 05:43:23.447764 kernel: AES CTR mode by8 optimization enabled
Jan 14 05:43:23.485375 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 05:43:23.485451 systemd-networkd[725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 05:43:23.487438 systemd-networkd[725]: eth0: Link UP
Jan 14 05:43:24.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:23.489174 systemd-networkd[725]: eth0: Gained carrier
Jan 14 05:43:23.489188 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 05:43:23.503990 systemd-networkd[725]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 14 05:43:23.617013 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 05:43:24.078991 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 05:43:24.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:24.100472 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 05:43:24.122356 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 05:43:24.128390 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 05:43:24.156963 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 05:43:24.216242 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 05:43:24.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:24.474061 disk-uuid[770]: Warning: The kernel is still using the old partition table.
Jan 14 05:43:24.474061 disk-uuid[770]: The new table will be used at the next reboot or after you
Jan 14 05:43:24.474061 disk-uuid[770]: run partprobe(8) or kpartx(8)
Jan 14 05:43:24.474061 disk-uuid[770]: The operation has completed successfully.
Jan 14 05:43:24.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:24.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:24.491042 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 05:43:24.491282 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 05:43:24.508150 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 05:43:24.600755 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (860)
Jan 14 05:43:24.617717 kernel: BTRFS info (device vda6): first mount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01
Jan 14 05:43:24.617750 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 05:43:24.643770 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 05:43:24.643798 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 05:43:24.665724 kernel: BTRFS info (device vda6): last unmount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01
Jan 14 05:43:24.671262 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 05:43:24.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:24.678208 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 05:43:24.819948 systemd-networkd[725]: eth0: Gained IPv6LL
Jan 14 05:43:24.887486 ignition[879]: Ignition 2.24.0
Jan 14 05:43:24.887736 ignition[879]: Stage: fetch-offline
Jan 14 05:43:24.887786 ignition[879]: no configs at "/usr/lib/ignition/base.d"
Jan 14 05:43:24.887800 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 05:43:24.887954 ignition[879]: parsed url from cmdline: ""
Jan 14 05:43:24.887959 ignition[879]: no config URL provided
Jan 14 05:43:24.887965 ignition[879]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 05:43:24.887977 ignition[879]: no config at "/usr/lib/ignition/user.ign"
Jan 14 05:43:24.888019 ignition[879]: op(1): [started] loading QEMU firmware config module
Jan 14 05:43:24.888023 ignition[879]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 14 05:43:24.913368 ignition[879]: op(1): [finished] loading QEMU firmware config module
Jan 14 05:43:25.569419 ignition[879]: parsing config with SHA512: 693eace77beffcbe6deb7a43097ee533021f539a43841dc38c5e92f6088ace0de62b4d9c11df60e658d22b16d0fa21abba1798f2fe62c93f7ff8accc7b432888
Jan 14 05:43:25.586041 unknown[879]: fetched base config from "system"
Jan 14 05:43:25.586108 unknown[879]: fetched user config from "qemu"
Jan 14 05:43:25.586905 ignition[879]: fetch-offline: fetch-offline passed
Jan 14 05:43:25.586979 ignition[879]: Ignition finished successfully
Jan 14 05:43:25.612034 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 05:43:25.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:25.620935 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 14 05:43:25.622428 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 05:43:25.742778 ignition[889]: Ignition 2.24.0
Jan 14 05:43:25.742909 ignition[889]: Stage: kargs
Jan 14 05:43:25.743062 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Jan 14 05:43:25.743072 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 05:43:25.743996 ignition[889]: kargs: kargs passed
Jan 14 05:43:25.744044 ignition[889]: Ignition finished successfully
Jan 14 05:43:25.775327 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 05:43:25.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:25.785258 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 05:43:25.866733 ignition[897]: Ignition 2.24.0
Jan 14 05:43:25.866805 ignition[897]: Stage: disks
Jan 14 05:43:25.867054 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Jan 14 05:43:25.867067 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 05:43:25.889005 ignition[897]: disks: disks passed
Jan 14 05:43:25.889256 ignition[897]: Ignition finished successfully
Jan 14 05:43:25.901441 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 05:43:25.935982 kernel: kauditd_printk_skb: 23 callbacks suppressed
Jan 14 05:43:25.936021 kernel: audit: type=1130 audit(1768369405.908:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:25.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:25.909416 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 05:43:25.943071 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 05:43:25.954033 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 05:43:25.965088 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 05:43:25.977386 systemd[1]: Reached target basic.target - Basic System.
Jan 14 05:43:26.010359 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 05:43:26.070351 systemd-fsck[906]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 14 05:43:26.079312 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 05:43:26.108967 kernel: audit: type=1130 audit(1768369406.084:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:26.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:26.086319 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 05:43:26.396673 kernel: EXT4-fs (vda9): mounted filesystem 06cc0495-6f26-4e6e-84ba-33c1e3a1737c r/w with ordered data mode. Quota mode: none.
Jan 14 05:43:26.398180 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 05:43:26.405106 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 05:43:26.419108 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 05:43:26.458993 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 05:43:26.491117 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (914)
Jan 14 05:43:26.461116 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 14 05:43:26.461156 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 05:43:26.461185 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 05:43:26.538918 kernel: BTRFS info (device vda6): first mount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01
Jan 14 05:43:26.538993 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 05:43:26.558017 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 05:43:26.558063 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 05:43:26.560719 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 05:43:26.576914 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 05:43:26.580485 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 05:43:27.028745 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 05:43:27.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:27.049470 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 05:43:27.075958 kernel: audit: type=1130 audit(1768369407.037:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:27.082132 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 05:43:27.100094 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 05:43:27.114712 kernel: BTRFS info (device vda6): last unmount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01
Jan 14 05:43:27.180734 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 05:43:27.212903 kernel: audit: type=1130 audit(1768369407.188:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:27.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:27.220913 ignition[1012]: INFO : Ignition 2.24.0
Jan 14 05:43:27.220913 ignition[1012]: INFO : Stage: mount
Jan 14 05:43:27.230743 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 05:43:27.230743 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 05:43:27.230743 ignition[1012]: INFO : mount: mount passed
Jan 14 05:43:27.230743 ignition[1012]: INFO : Ignition finished successfully
Jan 14 05:43:27.260224 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 05:43:27.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:27.275985 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 05:43:27.301718 kernel: audit: type=1130 audit(1768369407.272:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:27.400161 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 05:43:27.466918 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1026)
Jan 14 05:43:27.486362 kernel: BTRFS info (device vda6): first mount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01
Jan 14 05:43:27.486433 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 05:43:27.510096 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 05:43:27.510154 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 05:43:27.513950 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 05:43:27.601724 ignition[1042]: INFO : Ignition 2.24.0
Jan 14 05:43:27.601724 ignition[1042]: INFO : Stage: files
Jan 14 05:43:27.615934 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 05:43:27.615934 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 05:43:27.615934 ignition[1042]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 05:43:27.615934 ignition[1042]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 05:43:27.615934 ignition[1042]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 05:43:27.663393 ignition[1042]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 05:43:27.663393 ignition[1042]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 05:43:27.663393 ignition[1042]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 05:43:27.654171 unknown[1042]: wrote ssh authorized keys file for user: core
Jan 14 05:43:27.700688 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 14 05:43:27.700688 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 14 05:43:27.879958 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 05:43:28.047461 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 14 05:43:28.047461 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 05:43:28.076390 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 05:43:28.076390 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 05:43:28.076390 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 05:43:28.076390 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 05:43:28.076390 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 05:43:28.076390 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 05:43:28.076390 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 05:43:28.179140 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 05:43:28.193030 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 05:43:28.193030 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 14 05:43:28.236498 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 14 05:43:28.236498 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 14 05:43:28.269948 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 14 05:43:28.570168 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 14 05:43:29.008817 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 14 05:43:29.008817 ignition[1042]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 14 05:43:29.035179 ignition[1042]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 05:43:29.035179 ignition[1042]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 05:43:29.035179 ignition[1042]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 14 05:43:29.035179 ignition[1042]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 14 05:43:29.035179 ignition[1042]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 05:43:29.035179 ignition[1042]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 05:43:29.035179 ignition[1042]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 14 05:43:29.035179 ignition[1042]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 14 05:43:29.124789 ignition[1042]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 05:43:29.124789 ignition[1042]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 05:43:29.124789 ignition[1042]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 14 05:43:29.124789 ignition[1042]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 05:43:29.124789 ignition[1042]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 05:43:29.124789 ignition[1042]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 05:43:29.124789 ignition[1042]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 05:43:29.124789 ignition[1042]: INFO : files: files passed
Jan 14 05:43:29.124789 ignition[1042]: INFO : Ignition finished successfully
Jan 14 05:43:29.252465 kernel: audit: type=1130 audit(1768369409.130:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:29.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:29.121067 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 05:43:29.134958 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 05:43:29.159205 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 05:43:29.291153 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 05:43:29.291400 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 05:43:29.345969 kernel: audit: type=1130 audit(1768369409.305:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:29.345998 kernel: audit: type=1131 audit(1768369409.305:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:29.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:29.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:29.346078 initrd-setup-root-after-ignition[1073]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 14 05:43:29.362334 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 05:43:29.362334 initrd-setup-root-after-ignition[1076]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 05:43:29.383306 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 05:43:29.398310 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 05:43:29.437402 kernel: audit: type=1130 audit(1768369409.408:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:29.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:29.437662 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 05:43:29.454211 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 05:43:29.568693 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 05:43:29.569080 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 05:43:29.606812 kernel: audit: type=1130 audit(1768369409.580:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:29.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:29.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:29.581102 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 05:43:29.614470 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 05:43:29.626827 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 05:43:29.628368 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 05:43:29.681131 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 05:43:29.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:29.695264 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 05:43:29.757098 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 05:43:29.757772 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 05:43:29.768769 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 05:43:29.780266 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 05:43:29.792812 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 05:43:29.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:29.793183 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 05:43:29.806046 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 05:43:29.814020 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 05:43:29.830034 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 05:43:29.847037 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 05:43:29.866056 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 05:43:29.883049 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 05:43:29.914164 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 05:43:29.924680 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 05:43:29.941054 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 05:43:29.956846 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 05:43:29.972425 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 05:43:30.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:29.991166 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 05:43:29.991378 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 05:43:30.013854 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 05:43:30.025731 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 05:43:30.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.038352 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 05:43:30.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.038817 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 05:43:30.049320 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 05:43:30.049761 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 05:43:30.063942 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 05:43:30.064174 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 05:43:30.073930 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 05:43:30.076797 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 05:43:30.077722 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 05:43:30.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.107716 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 05:43:30.122362 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 05:43:30.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.135048 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 05:43:30.135205 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 05:43:30.149153 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 05:43:30.149298 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 05:43:30.161008 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 14 05:43:30.161079 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 05:43:30.172192 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 05:43:30.172306 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 05:43:30.184200 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 05:43:30.288465 ignition[1100]: INFO : Ignition 2.24.0
Jan 14 05:43:30.288465 ignition[1100]: INFO : Stage: umount
Jan 14 05:43:30.288465 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 05:43:30.288465 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 05:43:30.184302 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 05:43:30.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.206225 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 05:43:30.259115 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 05:43:30.364715 ignition[1100]: INFO : umount: umount passed
Jan 14 05:43:30.364715 ignition[1100]: INFO : Ignition finished successfully
Jan 14 05:43:30.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.277751 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 05:43:30.278221 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 05:43:30.288124 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 05:43:30.288347 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 05:43:30.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.307295 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 05:43:30.307756 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 05:43:30.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.330077 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 05:43:30.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.362992 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 05:43:30.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.399963 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 05:43:30.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.406858 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 05:43:30.407127 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 05:43:30.420272 systemd[1]: Stopped target network.target - Network.
Jan 14 05:43:30.435854 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 05:43:30.436025 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 05:43:30.447762 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 05:43:30.447822 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 05:43:30.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.462504 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 05:43:30.462683 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 05:43:30.477245 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 05:43:30.598000 audit: BPF prog-id=6 op=UNLOAD
Jan 14 05:43:30.477299 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 05:43:30.483951 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 05:43:30.513098 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 05:43:30.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.564276 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 05:43:30.564813 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 05:43:30.619280 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 05:43:30.619670 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 05:43:30.668293 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 14 05:43:30.683157 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 05:43:30.683270 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 05:43:30.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.704275 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 05:43:30.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.722819 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 05:43:30.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.723013 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 05:43:30.738856 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 05:43:30.799000 audit: BPF prog-id=9 op=UNLOAD
Jan 14 05:43:30.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.739010 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 05:43:30.761075 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 05:43:30.761160 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 05:43:30.773479 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 05:43:30.788820 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 05:43:30.800169 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 05:43:30.807245 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 05:43:30.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.807367 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 05:43:30.861173 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 05:43:30.861423 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 05:43:30.953061 kernel: kauditd_printk_skb: 28 callbacks suppressed
Jan 14 05:43:30.953100 kernel: audit: type=1131 audit(1768369410.924:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:30.877222 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 05:43:30.877294 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 05:43:30.893467 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 05:43:30.893671 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 05:43:30.909721 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 05:43:30.909824 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 05:43:30.956088 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 05:43:30.956195 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 05:43:31.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.023968 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 05:43:31.076751 kernel: audit: type=1131 audit(1768369411.022:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.076791 kernel: audit: type=1131 audit(1768369411.047:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.024120 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 05:43:31.091006 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 05:43:31.107705 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 14 05:43:31.107973 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 05:43:31.148859 kernel: audit: type=1131 audit(1768369411.120:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.121290 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 05:43:31.181862 kernel: audit: type=1131 audit(1768369411.154:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.121371 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 05:43:31.218066 kernel: audit: type=1131 audit(1768369411.187:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.155023 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 14 05:43:31.250334 kernel: audit: type=1131 audit(1768369411.224:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.155134 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 05:43:31.301104 kernel: audit: type=1131 audit(1768369411.256:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.188417 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 05:43:31.188652 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 05:43:31.225271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 05:43:31.348710 kernel: audit: type=1131 audit(1768369411.320:81): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.225386 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 05:43:31.381650 kernel: audit: type=1130 audit(1768369411.357:82): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:31.258324 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 05:43:31.311949 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 05:43:31.321690 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 05:43:31.321837 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 05:43:31.375865 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 05:43:31.390096 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 05:43:31.409772 systemd[1]: Switching root.
Jan 14 05:43:31.480006 systemd-journald[319]: Journal stopped
Jan 14 05:43:34.287903 systemd-journald[319]: Received SIGTERM from PID 1 (systemd).
Jan 14 05:43:34.288093 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 05:43:34.288116 kernel: SELinux: policy capability open_perms=1
Jan 14 05:43:34.288135 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 05:43:34.288153 kernel: SELinux: policy capability always_check_network=0
Jan 14 05:43:34.288170 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 05:43:34.288195 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 05:43:34.288222 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 05:43:34.288245 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 05:43:34.288272 kernel: SELinux: policy capability userspace_initial_context=0
Jan 14 05:43:34.288290 systemd[1]: Successfully loaded SELinux policy in 112.834ms.
Jan 14 05:43:34.288315 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.904ms.
Jan 14 05:43:34.288332 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 05:43:34.288353 systemd[1]: Detected virtualization kvm.
Jan 14 05:43:34.288374 systemd[1]: Detected architecture x86-64.
Jan 14 05:43:34.288396 systemd[1]: Detected first boot.
Jan 14 05:43:34.288412 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 05:43:34.288428 zram_generator::config[1146]: No configuration found.
Jan 14 05:43:34.288449 kernel: Guest personality initialized and is inactive
Jan 14 05:43:34.288466 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 14 05:43:34.288486 kernel: Initialized host personality
Jan 14 05:43:34.288504 kernel: NET: Registered PF_VSOCK protocol family
Jan 14 05:43:34.288664 systemd[1]: Populated /etc with preset unit settings.
Jan 14 05:43:34.288682 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 05:43:34.288698 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 05:43:34.288715 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 05:43:34.288742 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 05:43:34.288762 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 05:43:34.288779 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 05:43:34.288795 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 05:43:34.288814 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 05:43:34.288832 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 05:43:34.288848 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 05:43:34.288864 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 05:43:34.288884 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 05:43:34.288902 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 05:43:34.289001 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 05:43:34.289020 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 05:43:34.289039 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 05:43:34.289055 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 05:43:34.289071 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 14 05:43:34.289092 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 05:43:34.289112 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 05:43:34.289131 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 05:43:34.289155 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 05:43:34.289173 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 05:43:34.289191 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 05:43:34.289216 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 05:43:34.289234 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 05:43:34.289250 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 14 05:43:34.289266 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 05:43:34.289282 systemd[1]: Reached target swap.target - Swaps.
Jan 14 05:43:34.289300 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 05:43:34.289319 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 05:43:34.289344 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 14 05:43:34.289361 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 05:43:34.289377 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 14 05:43:34.289392 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 05:43:34.289408 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 14 05:43:34.289428 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 14 05:43:34.289444 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 05:43:34.289460 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 05:43:34.289481 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 05:43:34.289499 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 05:43:34.289655 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 05:43:34.289675 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 05:43:34.289692 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 05:43:34.289708 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 05:43:34.289729 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 05:43:34.289748 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 05:43:34.289768 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 05:43:34.289787 systemd[1]: Reached target machines.target - Containers.
Jan 14 05:43:34.289806 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 05:43:34.289824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 05:43:34.289844 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 05:43:34.289867 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 05:43:34.289885 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 05:43:34.289989 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 05:43:34.290012 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 05:43:34.290030 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 05:43:34.290048 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 05:43:34.290067 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 05:43:34.290091 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 05:43:34.290110 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 05:43:34.290131 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 05:43:34.290150 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 05:43:34.290170 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 05:43:34.290189 kernel: ACPI: bus type drm_connector registered
Jan 14 05:43:34.290206 kernel: fuse: init (API version 7.41)
Jan 14 05:43:34.290228 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 05:43:34.290247 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 05:43:34.290269 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 05:43:34.290286 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 05:43:34.290306 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 14 05:43:34.290323 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 05:43:34.290339 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 05:43:34.290387 systemd-journald[1232]: Collecting audit messages is enabled.
Jan 14 05:43:34.290417 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 05:43:34.290438 systemd-journald[1232]: Journal started
Jan 14 05:43:34.290468 systemd-journald[1232]: Runtime Journal (/run/log/journal/b4189faa4da54fe19a660ab523490fe4) is 6M, max 48.2M, 42.1M free.
Jan 14 05:43:33.353000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 14 05:43:34.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.072000 audit: BPF prog-id=14 op=UNLOAD
Jan 14 05:43:34.072000 audit: BPF prog-id=13 op=UNLOAD
Jan 14 05:43:34.098000 audit: BPF prog-id=15 op=LOAD
Jan 14 05:43:34.099000 audit: BPF prog-id=16 op=LOAD
Jan 14 05:43:34.100000 audit: BPF prog-id=17 op=LOAD
Jan 14 05:43:34.284000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 14 05:43:34.284000 audit[1232]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffde2fd5ee0 a2=4000 a3=0 items=0 ppid=1 pid=1232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:43:34.284000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 14 05:43:32.811113 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 05:43:32.830448 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 14 05:43:32.832037 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 05:43:32.832713 systemd[1]: systemd-journald.service: Consumed 3.805s CPU time.
Jan 14 05:43:34.321354 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 05:43:34.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.330331 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 05:43:34.340793 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 05:43:34.349879 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 05:43:34.360105 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 05:43:34.370133 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 05:43:34.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.379331 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 05:43:34.391158 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 05:43:34.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.403362 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 05:43:34.404378 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 05:43:34.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.421813 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 05:43:34.422355 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 05:43:34.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.432459 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 05:43:34.433272 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 05:43:34.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.443353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 05:43:34.444235 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 05:43:34.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.454403 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 05:43:34.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.455025 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 05:43:34.464018 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 05:43:34.464427 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 05:43:34.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.475090 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 05:43:34.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.486283 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 05:43:34.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.499780 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 05:43:34.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.522474 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 14 05:43:34.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.536367 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 05:43:34.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:34.567452 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 05:43:34.579390 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 14 05:43:34.593290 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 05:43:34.605149 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 05:43:34.622376 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 05:43:34.622657 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 05:43:34.633179 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 14 05:43:34.642790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 05:43:34.643083 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 05:43:34.645860 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 05:43:34.656412 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 05:43:34.665067 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 05:43:34.666888 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 05:43:34.675399 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 05:43:34.680048 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 05:43:34.695148 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 05:43:34.720301 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 05:43:34.736658 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 05:43:34.746035 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 05:43:34.754281 systemd-journald[1232]: Time spent on flushing to /var/log/journal/b4189faa4da54fe19a660ab523490fe4 is 22.310ms for 1114 entries.
Jan 14 05:43:34.754281 systemd-journald[1232]: System Journal (/var/log/journal/b4189faa4da54fe19a660ab523490fe4) is 8M, max 163.5M, 155.5M free.
Jan 14 05:43:34.801270 systemd-journald[1232]: Received client request to flush runtime journal.
Jan 14 05:43:35.363781 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1058533095 wd_nsec: 1058532513
Jan 14 05:43:35.581260 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 05:43:35.630885 kernel: loop1: detected capacity change from 0 to 50784
Jan 14 05:43:35.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:35.661822 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 05:43:35.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:35.686255 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Jan 14 05:43:35.686355 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Jan 14 05:43:35.693077 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 05:43:35.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:35.740666 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 05:43:35.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:35.757801 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 05:43:35.762746 kernel: loop2: detected capacity change from 0 to 111560
Jan 14 05:43:35.775420 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 14 05:43:35.788029 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 05:43:35.832699 kernel: loop3: detected capacity change from 0 to 219144
Jan 14 05:43:35.857722 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 05:43:35.861084 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 14 05:43:35.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:35.922004 kernel: loop4: detected capacity change from 0 to 50784
Jan 14 05:43:35.944716 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 05:43:35.968740 kernel: kauditd_printk_skb: 53 callbacks suppressed
Jan 14 05:43:35.968911 kernel: audit: type=1130 audit(1768369415.956:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:35.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:35.963368 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 14 05:43:35.960000 audit: BPF prog-id=18 op=LOAD
Jan 14 05:43:36.001179 kernel: audit: type=1334 audit(1768369415.960:135): prog-id=18 op=LOAD
Jan 14 05:43:36.001253 kernel: audit: type=1334 audit(1768369415.960:136): prog-id=19 op=LOAD
Jan 14 05:43:36.002012 kernel: audit: type=1334 audit(1768369415.961:137): prog-id=20 op=LOAD
Jan 14 05:43:35.960000 audit: BPF prog-id=19 op=LOAD
Jan 14 05:43:35.961000 audit: BPF prog-id=20 op=LOAD
Jan 14 05:43:36.056854 kernel: audit: type=1334 audit(1768369416.045:138): prog-id=21 op=LOAD
Jan 14 05:43:36.045000 audit: BPF prog-id=21 op=LOAD
Jan 14 05:43:36.058021 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 05:43:36.150709 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 05:43:36.214010 kernel: audit: type=1334 audit(1768369416.184:139): prog-id=22 op=LOAD
Jan 14 05:43:36.229504 kernel: audit: type=1334 audit(1768369416.199:140): prog-id=23 op=LOAD
Jan 14 05:43:36.243773 kernel: audit: type=1334 audit(1768369416.199:141): prog-id=24 op=LOAD
Jan 14 05:43:36.184000 audit: BPF prog-id=22 op=LOAD
Jan 14 05:43:36.199000 audit: BPF prog-id=23 op=LOAD
Jan 14 05:43:36.199000 audit: BPF prog-id=24 op=LOAD
Jan 14 05:43:36.558145 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 14 05:43:36.573704 kernel: loop5: detected capacity change from 0 to 111560
Jan 14 05:43:36.600029 kernel: audit: type=1334 audit(1768369416.583:142): prog-id=25 op=LOAD
Jan 14 05:43:36.583000 audit: BPF prog-id=25 op=LOAD
Jan 14 05:43:36.592188 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 05:43:36.583000 audit: BPF prog-id=26 op=LOAD
Jan 14 05:43:36.629796 kernel: audit: type=1334 audit(1768369416.583:143): prog-id=26 op=LOAD
Jan 14 05:43:36.583000 audit: BPF prog-id=27 op=LOAD
Jan 14 05:43:36.638618 kernel: loop6: detected capacity change from 0 to 219144
Jan 14 05:43:36.662774 (sd-merge)[1288]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 14 05:43:36.686287 (sd-merge)[1288]: Merged extensions into '/usr'.
Jan 14 05:43:36.694750 systemd[1]: Reload requested from client PID 1267 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 05:43:36.694849 systemd[1]: Reloading...
Jan 14 05:43:36.694865 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Jan 14 05:43:36.694878 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Jan 14 05:43:36.760316 systemd-nsresourced[1293]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 14 05:43:36.964721 zram_generator::config[1342]: No configuration found.
Jan 14 05:43:37.073188 systemd-resolved[1291]: Positive Trust Anchors:
Jan 14 05:43:37.073209 systemd-resolved[1291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 05:43:37.073216 systemd-resolved[1291]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 05:43:37.073259 systemd-resolved[1291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 05:43:37.081188 systemd-oomd[1290]: No swap; memory pressure usage will be degraded
Jan 14 05:43:37.083332 systemd-resolved[1291]: Defaulting to hostname 'linux'.
Jan 14 05:43:37.265413 systemd[1]: Reloading finished in 569 ms.
Jan 14 05:43:37.313778 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 14 05:43:37.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:37.324012 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 05:43:37.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:37.332847 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 14 05:43:37.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:37.342502 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 05:43:37.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:37.352750 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 05:43:37.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:37.371143 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 05:43:37.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:37.412786 kernel: hrtimer: interrupt took 3470729 ns
Jan 14 05:43:37.515290 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 05:43:37.558198 systemd[1]: Starting ensure-sysext.service...
Jan 14 05:43:37.592161 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
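The daemon reload above reports "Reloading finished in 569 ms"; that figure can be cross-checked against the journal's own timestamps ("Reloading..." at 05:43:36.694849, the finish message at 05:43:37.265413). A minimal sketch; the year is an assumption, since these short journal timestamps omit it:

```python
from datetime import datetime

# Timestamps copied from the journal records above; the year 2026 is assumed
# (short-form journal timestamps do not carry one).
fmt = "%b %d %Y %H:%M:%S.%f"
start = datetime.strptime("Jan 14 2026 05:43:36.694849", fmt)
end = datetime.strptime("Jan 14 2026 05:43:37.265413", fmt)

# Wall-clock span between the two records, in milliseconds.
delta_ms = (end - start).total_seconds() * 1000
print(f"wall-clock reload span: {delta_ms:.1f} ms")
```

The result (about 570 ms) is slightly larger than the 569 ms systemd reports, since systemd measures the reload itself rather than the gap between log records.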
Jan 14 05:43:37.636000 audit: BPF prog-id=28 op=LOAD
Jan 14 05:43:37.636000 audit: BPF prog-id=21 op=UNLOAD
Jan 14 05:43:37.650000 audit: BPF prog-id=29 op=LOAD
Jan 14 05:43:37.651000 audit: BPF prog-id=18 op=UNLOAD
Jan 14 05:43:37.652000 audit: BPF prog-id=30 op=LOAD
Jan 14 05:43:37.652000 audit: BPF prog-id=31 op=LOAD
Jan 14 05:43:37.653000 audit: BPF prog-id=19 op=UNLOAD
Jan 14 05:43:37.653000 audit: BPF prog-id=20 op=UNLOAD
Jan 14 05:43:37.661000 audit: BPF prog-id=32 op=LOAD
Jan 14 05:43:37.661000 audit: BPF prog-id=15 op=UNLOAD
Jan 14 05:43:37.663000 audit: BPF prog-id=33 op=LOAD
Jan 14 05:43:37.663000 audit: BPF prog-id=34 op=LOAD
Jan 14 05:43:37.663000 audit: BPF prog-id=16 op=UNLOAD
Jan 14 05:43:37.664000 audit: BPF prog-id=17 op=UNLOAD
Jan 14 05:43:37.680000 audit: BPF prog-id=35 op=LOAD
Jan 14 05:43:37.681000 audit: BPF prog-id=25 op=UNLOAD
Jan 14 05:43:37.681000 audit: BPF prog-id=36 op=LOAD
Jan 14 05:43:37.681000 audit: BPF prog-id=37 op=LOAD
Jan 14 05:43:37.681000 audit: BPF prog-id=26 op=UNLOAD
Jan 14 05:43:37.681000 audit: BPF prog-id=27 op=UNLOAD
Jan 14 05:43:37.683000 audit: BPF prog-id=38 op=LOAD
Jan 14 05:43:37.683000 audit: BPF prog-id=22 op=UNLOAD
Jan 14 05:43:37.683000 audit: BPF prog-id=39 op=LOAD
Jan 14 05:43:37.683000 audit: BPF prog-id=40 op=LOAD
Jan 14 05:43:37.683000 audit: BPF prog-id=23 op=UNLOAD
Jan 14 05:43:37.683000 audit: BPF prog-id=24 op=UNLOAD
Jan 14 05:43:37.739189 systemd[1]: Reload requested from client PID 1375 ('systemctl') (unit ensure-sysext.service)...
Jan 14 05:43:37.739207 systemd[1]: Reloading...
Jan 14 05:43:38.481662 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 14 05:43:38.481734 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 14 05:43:38.482239 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 14 05:43:38.486243 systemd-tmpfiles[1376]: ACLs are not supported, ignoring.
Jan 14 05:43:38.486441 systemd-tmpfiles[1376]: ACLs are not supported, ignoring.
Jan 14 05:43:38.497203 systemd-tmpfiles[1376]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 05:43:38.497294 systemd-tmpfiles[1376]: Skipping /boot
Jan 14 05:43:38.550145 systemd-tmpfiles[1376]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 05:43:38.550237 systemd-tmpfiles[1376]: Skipping /boot
Jan 14 05:43:38.596748 zram_generator::config[1408]: No configuration found.
Jan 14 05:43:38.890382 systemd[1]: Reloading finished in 1150 ms.
Jan 14 05:43:38.932681 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 14 05:43:38.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:38.950000 audit: BPF prog-id=41 op=LOAD
Jan 14 05:43:38.950000 audit: BPF prog-id=32 op=UNLOAD
Jan 14 05:43:38.950000 audit: BPF prog-id=42 op=LOAD
Jan 14 05:43:38.950000 audit: BPF prog-id=43 op=LOAD
Jan 14 05:43:38.950000 audit: BPF prog-id=33 op=UNLOAD
Jan 14 05:43:38.950000 audit: BPF prog-id=34 op=UNLOAD
Jan 14 05:43:38.952000 audit: BPF prog-id=44 op=LOAD
Jan 14 05:43:38.952000 audit: BPF prog-id=35 op=UNLOAD
Jan 14 05:43:38.952000 audit: BPF prog-id=45 op=LOAD
Jan 14 05:43:38.952000 audit: BPF prog-id=46 op=LOAD
Jan 14 05:43:38.952000 audit: BPF prog-id=36 op=UNLOAD
Jan 14 05:43:38.952000 audit: BPF prog-id=37 op=UNLOAD
Jan 14 05:43:38.954000 audit: BPF prog-id=47 op=LOAD
Jan 14 05:43:38.954000 audit: BPF prog-id=29 op=UNLOAD
Jan 14 05:43:38.954000 audit: BPF prog-id=48 op=LOAD
Jan 14 05:43:38.954000 audit: BPF prog-id=49 op=LOAD
Jan 14 05:43:38.954000 audit: BPF prog-id=30 op=UNLOAD
Jan 14 05:43:38.954000 audit: BPF prog-id=31 op=UNLOAD
Jan 14 05:43:38.956000 audit: BPF prog-id=50 op=LOAD
Jan 14 05:43:38.956000 audit: BPF prog-id=28 op=UNLOAD
Jan 14 05:43:38.959000 audit: BPF prog-id=51 op=LOAD
Jan 14 05:43:38.980000 audit: BPF prog-id=38 op=UNLOAD
Jan 14 05:43:38.981000 audit: BPF prog-id=52 op=LOAD
Jan 14 05:43:38.982000 audit: BPF prog-id=53 op=LOAD
Jan 14 05:43:38.982000 audit: BPF prog-id=39 op=UNLOAD
Jan 14 05:43:38.982000 audit: BPF prog-id=40 op=UNLOAD
Jan 14 05:43:39.037789 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 05:43:39.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:39.491382 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 05:43:39.524783 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 14 05:43:39.558882 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 14 05:43:39.690262 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 14 05:43:39.727000 audit: BPF prog-id=8 op=UNLOAD
Jan 14 05:43:39.727000 audit: BPF prog-id=7 op=UNLOAD
Jan 14 05:43:39.731000 audit: BPF prog-id=54 op=LOAD
Jan 14 05:43:39.731000 audit: BPF prog-id=55 op=LOAD
Jan 14 05:43:39.736326 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 05:43:39.751088 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 14 05:43:39.770864 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 05:43:39.774249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
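The `audit: BPF prog-id=… op=LOAD/UNLOAD` churn above is systemd replacing per-unit BPF programs during reload. A minimal sketch of tallying the operations with a regular expression, using a few records copied verbatim from the log (any larger record set would work the same way):

```python
import re

# Sample audit records copied from the journal above (timestamps trimmed).
records = [
    "audit: BPF prog-id=50 op=LOAD",
    "audit: BPF prog-id=28 op=UNLOAD",
    "audit: BPF prog-id=51 op=LOAD",
    "audit: BPF prog-id=38 op=UNLOAD",
]

# Extract (prog-id, op) pairs and count loads vs. unloads.
ops = [re.search(r"prog-id=(\d+) op=(LOAD|UNLOAD)", r).groups() for r in records]
loads = sum(1 for _, op in ops if op == "LOAD")
unloads = sum(1 for _, op in ops if op == "UNLOAD")
print(loads, unloads)
```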
Jan 14 05:43:39.794844 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 05:43:39.797000 audit[1458]: SYSTEM_BOOT pid=1458 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:39.820955 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 05:43:39.860755 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 05:43:39.864238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 05:43:39.864812 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 05:43:39.865130 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 05:43:39.865275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 05:43:39.879326 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 14 05:43:39.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:39.896230 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 05:43:39.897918 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 05:43:39.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:39.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:39.938961 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 05:43:39.939445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 05:43:39.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:39.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:39.952917 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 05:43:39.953744 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 05:43:39.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:39.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:39.985494 systemd-udevd[1457]: Using default interface naming scheme 'v257'.
Jan 14 05:43:39.990800 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 14 05:43:39.999156 augenrules[1480]: No rules
Jan 14 05:43:39.998000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 14 05:43:39.998000 audit[1480]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff53d64490 a2=420 a3=0 items=0 ppid=1447 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:43:39.998000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 05:43:40.004406 systemd[1]: Finished ensure-sysext.service.
Jan 14 05:43:40.031414 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 05:43:40.032729 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 05:43:40.049033 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 05:43:40.049290 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 05:43:40.053854 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 05:43:40.067144 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 05:43:40.079071 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 05:43:40.096231 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 05:43:40.111424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
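The audit `PROCTITLE` field above is the process command line, hex-encoded with NUL bytes separating the argv entries. Decoding the value shown in the log recovers the `auditctl` invocation that loaded the rules:

```python
# PROCTITLE value copied verbatim from the audit record above.
hexdata = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

# argv entries are NUL-separated in the raw proctitle buffer.
argv = [a.decode() for a in bytes.fromhex(hexdata).split(b"\x00")]
print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```

This matches the `comm="auditctl" exe="/usr/bin/auditctl"` fields of the preceding SYSCALL record.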
Jan 14 05:43:40.112195 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 05:43:40.123953 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 05:43:40.130804 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 14 05:43:40.139858 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 05:43:40.147212 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 14 05:43:40.160132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 05:43:40.161707 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 05:43:40.172338 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 05:43:40.175392 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 05:43:40.186331 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 05:43:40.186918 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 05:43:40.198345 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 05:43:40.254083 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 05:43:40.262404 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 05:43:40.262644 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 14 05:43:40.286949 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 05:43:40.290956 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 05:43:40.319843 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 05:43:40.827684 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 14 05:43:40.837230 systemd[1]: Reached target time-set.target - System Time Set.
Jan 14 05:43:40.940896 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 14 05:43:40.972900 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 14 05:43:40.986081 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 14 05:43:41.015785 systemd-networkd[1505]: lo: Link UP
Jan 14 05:43:41.015802 systemd-networkd[1505]: lo: Gained carrier
Jan 14 05:43:41.019176 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 05:43:41.033425 systemd[1]: Reached target network.target - Network.
Jan 14 05:43:41.045049 systemd-networkd[1505]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 05:43:41.045136 systemd-networkd[1505]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 05:43:41.046766 systemd-networkd[1505]: eth0: Link UP
Jan 14 05:43:41.049195 systemd-networkd[1505]: eth0: Gained carrier
Jan 14 05:43:41.049224 systemd-networkd[1505]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 05:43:41.052919 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 14 05:43:41.067188 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 14 05:43:41.080674 systemd-networkd[1505]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 14 05:43:41.082778 systemd-timesyncd[1492]: Network configuration changed, trying to establish connection.
Jan 14 05:43:41.633384 systemd-timesyncd[1492]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 14 05:43:41.633426 systemd-timesyncd[1492]: Initial clock synchronization to Wed 2026-01-14 05:43:41.633139 UTC.
Jan 14 05:43:41.634418 systemd-resolved[1291]: Clock change detected. Flushing caches.
Jan 14 05:43:41.682428 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 14 05:43:41.703399 kernel: mousedev: PS/2 mouse device common for all mice
Jan 14 05:43:41.703476 kernel: ACPI: button: Power Button [PWRF]
Jan 14 05:43:41.718494 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 14 05:43:41.751999 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 14 05:43:41.788515 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 14 05:43:41.801305 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 14 05:43:42.266751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
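The DHCPv4 lease above (`10.0.0.28/16`, gateway `10.0.0.1`) can be sanity-checked with the standard-library `ipaddress` module, for instance to confirm the gateway is on-link for the assigned prefix:

```python
import ipaddress

# Address and gateway exactly as reported by systemd-networkd above.
iface = ipaddress.ip_interface("10.0.0.28/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True: the gateway is reachable on-link
```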
Jan 14 05:43:42.725542 kernel: kvm_amd: TSC scaling supported
Jan 14 05:43:42.725628 kernel: kvm_amd: Nested Virtualization enabled
Jan 14 05:43:42.725741 kernel: kvm_amd: Nested Paging enabled
Jan 14 05:43:42.731461 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 14 05:43:42.731744 kernel: kvm_amd: PMU virtualization is disabled
Jan 14 05:43:43.469075 ldconfig[1449]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 14 05:43:43.483142 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 14 05:43:43.492975 kernel: EDAC MC: Ver: 3.0.0
Jan 14 05:43:43.500957 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 14 05:43:43.611072 systemd-networkd[1505]: eth0: Gained IPv6LL
Jan 14 05:43:43.622440 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 14 05:43:43.971925 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 05:43:43.984403 systemd[1]: Reached target network-online.target - Network is Online.
Jan 14 05:43:43.992986 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 14 05:43:44.002384 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 05:43:44.009606 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 14 05:43:44.017613 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 14 05:43:44.025950 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 14 05:43:44.033133 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 14 05:43:44.040121 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 14 05:43:44.048549 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Jan 14 05:43:44.056542 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Jan 14 05:43:44.063496 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 14 05:43:44.071549 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 14 05:43:44.071640 systemd[1]: Reached target paths.target - Path Units.
Jan 14 05:43:44.077346 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 05:43:44.086872 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 14 05:43:44.095718 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 14 05:43:44.106495 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 14 05:43:44.114474 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 14 05:43:44.122540 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 14 05:43:44.132403 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 14 05:43:44.139865 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 14 05:43:44.150503 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 14 05:43:44.159958 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 05:43:44.166869 systemd[1]: Reached target basic.target - Basic System.
Jan 14 05:43:44.173970 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 14 05:43:44.174077 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 14 05:43:44.176762 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 14 05:43:44.186333 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 14 05:43:44.205003 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 14 05:43:44.213364 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 14 05:43:44.221874 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 14 05:43:44.230499 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 14 05:43:44.235847 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 14 05:43:44.239286 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 14 05:43:44.246407 jq[1569]: false
Jan 14 05:43:44.249294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 05:43:44.260400 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 14 05:43:44.267525 extend-filesystems[1570]: Found /dev/vda6
Jan 14 05:43:44.273058 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing passwd entry cache
Jan 14 05:43:44.269997 oslogin_cache_refresh[1571]: Refreshing passwd entry cache
Jan 14 05:43:44.273351 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 14 05:43:44.280734 extend-filesystems[1570]: Found /dev/vda9
Jan 14 05:43:44.289756 extend-filesystems[1570]: Checking size of /dev/vda9
Jan 14 05:43:44.296112 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting users, quitting
Jan 14 05:43:44.296112 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 14 05:43:44.296094 oslogin_cache_refresh[1571]: Failure getting users, quitting
Jan 14 05:43:44.296352 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing group entry cache
Jan 14 05:43:44.296114 oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 14 05:43:44.296154 oslogin_cache_refresh[1571]: Refreshing group entry cache
Jan 14 05:43:44.298564 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 14 05:43:44.310777 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 14 05:43:44.318586 extend-filesystems[1570]: Resized partition /dev/vda9
Jan 14 05:43:44.325716 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting groups, quitting
Jan 14 05:43:44.325716 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 14 05:43:44.318625 oslogin_cache_refresh[1571]: Failure getting groups, quitting
Jan 14 05:43:44.318637 oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 14 05:43:44.326381 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 14 05:43:44.333343 extend-filesystems[1591]: resize2fs 1.47.3 (8-Jul-2025)
Jan 14 05:43:44.347460 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Jan 14 05:43:44.360819 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 14 05:43:44.367642 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 14 05:43:44.369093 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 14 05:43:44.370571 systemd[1]: Starting update-engine.service - Update Engine...
Jan 14 05:43:44.379122 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 14 05:43:44.401906 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 14 05:43:44.411840 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 14 05:43:44.412810 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 14 05:43:44.413596 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 14 05:43:44.414955 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 14 05:43:44.426010 systemd[1]: motdgen.service: Deactivated successfully.
Jan 14 05:43:44.427480 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 14 05:43:44.442370 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 14 05:43:44.506038 jq[1601]: true
Jan 14 05:43:44.445748 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 14 05:43:44.471821 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 14 05:43:44.538844 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Jan 14 05:43:44.564439 update_engine[1600]: I20260114 05:43:44.563956 1600 main.cc:92] Flatcar Update Engine starting
Jan 14 05:43:44.565384 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 14 05:43:44.574843 jq[1630]: true
Jan 14 05:43:44.565816 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 14 05:43:44.576548 extend-filesystems[1591]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 14 05:43:44.576548 extend-filesystems[1591]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 14 05:43:44.576548 extend-filesystems[1591]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
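The EXT4 resize above grows `/dev/vda9` from 456704 to 1784827 blocks of 4 KiB each; converting both block counts to bytes shows the root filesystem going from roughly 1.74 GiB to roughly 6.81 GiB. A quick sketch of the arithmetic:

```python
# Block counts from the resize2fs / EXT4-fs messages above; 4 KiB blocks.
block = 4096
old_blocks, new_blocks = 456704, 1784827

old_gib = old_blocks * block / 2**30
new_gib = new_blocks * block / 2**30
print(f"{old_gib:.2f} GiB -> {new_gib:.2f} GiB")  # 1.74 GiB -> 6.81 GiB
```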
Jan 14 05:43:44.627064 sshd_keygen[1599]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 14 05:43:44.627315 tar[1608]: linux-amd64/LICENSE
Jan 14 05:43:44.627315 tar[1608]: linux-amd64/helm
Jan 14 05:43:44.583130 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 14 05:43:44.627736 extend-filesystems[1570]: Resized filesystem in /dev/vda9
Jan 14 05:43:44.584766 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 14 05:43:44.616010 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 14 05:43:44.676977 systemd-logind[1596]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 14 05:43:44.677080 systemd-logind[1596]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 14 05:43:44.678960 systemd-logind[1596]: New seat seat0.
Jan 14 05:43:44.687121 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 14 05:43:44.688965 dbus-daemon[1567]: [system] SELinux support is enabled
Jan 14 05:43:44.695923 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 14 05:43:44.708895 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 14 05:43:44.708995 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 14 05:43:44.721353 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 14 05:43:44.721381 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 14 05:43:44.740817 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 14 05:43:44.753490 dbus-daemon[1567]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 14 05:43:44.768650 update_engine[1600]: I20260114 05:43:44.755551 1600 update_check_scheduler.cc:74] Next update check in 9m17s
Jan 14 05:43:44.937843 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 14 05:43:45.271978 systemd[1]: Started update-engine.service - Update Engine.
Jan 14 05:43:45.412020 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 14 05:43:45.455416 bash[1660]: Updated "/home/core/.ssh/authorized_keys"
Jan 14 05:43:45.469891 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 14 05:43:45.478959 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 14 05:43:45.485767 systemd[1]: issuegen.service: Deactivated successfully.
Jan 14 05:43:45.486628 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 14 05:43:45.508748 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 05:43:46.048124 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 14 05:43:46.061872 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 14 05:43:46.073945 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 14 05:43:46.083511 systemd[1]: Reached target getty.target - Login Prompts.
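The update_engine scheduler above reports "Next update check in 9m17s". A small sketch of converting that interval notation into seconds, assuming the simple `XmYs` form used in the message:

```python
import re

# Interval string as printed by update_engine above; the "(\d+)m(\d+)s" form
# is an assumption based on this one sample.
m = re.fullmatch(r"(?:(\d+)m)?(\d+)s", "9m17s")
minutes, seconds = int(m.group(1) or 0), int(m.group(2))
total = minutes * 60 + seconds
print(total)  # 557
```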
Jan 14 05:43:46.138047 locksmithd[1663]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 14 05:43:47.770804 containerd[1620]: time="2026-01-14T05:43:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 14 05:43:47.772532 containerd[1620]: time="2026-01-14T05:43:47.772091587Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 14 05:43:47.901540 containerd[1620]: time="2026-01-14T05:43:47.899331258Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="265.545µs"
Jan 14 05:43:47.901540 containerd[1620]: time="2026-01-14T05:43:47.899442386Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 14 05:43:47.901540 containerd[1620]: time="2026-01-14T05:43:47.899639303Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 14 05:43:47.901540 containerd[1620]: time="2026-01-14T05:43:47.899654572Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 14 05:43:47.901540 containerd[1620]: time="2026-01-14T05:43:47.899935737Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 14 05:43:47.901540 containerd[1620]: time="2026-01-14T05:43:47.900006108Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 14 05:43:47.901540 containerd[1620]: time="2026-01-14T05:43:47.900083532Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 14 05:43:47.901540 containerd[1620]: time="2026-01-14T05:43:47.900101767Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 14 05:43:47.901540 containerd[1620]: time="2026-01-14T05:43:47.900917800Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 14 05:43:47.901540 containerd[1620]: time="2026-01-14T05:43:47.900935012Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 14 05:43:47.901540 containerd[1620]: time="2026-01-14T05:43:47.900945952Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 14 05:43:47.901540 containerd[1620]: time="2026-01-14T05:43:47.900952875Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 14 05:43:47.902081 containerd[1620]: time="2026-01-14T05:43:47.901601165Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 14 05:43:47.902081 containerd[1620]: time="2026-01-14T05:43:47.901623647Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 14 05:43:47.902081 containerd[1620]: time="2026-01-14T05:43:47.902039763Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 14 05:43:47.906975 containerd[1620]: time="2026-01-14T05:43:47.904011844Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 14 05:43:47.906975 containerd[1620]: time="2026-01-14T05:43:47.904369523Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 14 05:43:47.906975 containerd[1620]: time="2026-01-14T05:43:47.904392746Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 14 05:43:47.906975 containerd[1620]: time="2026-01-14T05:43:47.905077183Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 14 05:43:47.906975 containerd[1620]: time="2026-01-14T05:43:47.906919602Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 14 05:43:47.907089 containerd[1620]: time="2026-01-14T05:43:47.907014890Z" level=info msg="metadata content store policy set" policy=shared
Jan 14 05:43:47.921927 containerd[1620]: time="2026-01-14T05:43:47.921809044Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 14 05:43:47.923948 containerd[1620]: time="2026-01-14T05:43:47.923643588Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 14 05:43:47.927345 containerd[1620]: time="2026-01-14T05:43:47.924139012Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 14 05:43:47.927345 containerd[1620]: time="2026-01-14T05:43:47.927141928Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 14 05:43:47.927345 containerd[1620]: time="2026-01-14T05:43:47.927303700Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 14 05:43:47.927433 containerd[1620]: time="2026-01-14T05:43:47.927384881Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 14 05:43:47.927433 containerd[1620]: time="2026-01-14T05:43:47.927402975Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 14 05:43:47.927433 containerd[1620]: time="2026-01-14T05:43:47.927415619Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 14 05:43:47.927480 containerd[1620]: time="2026-01-14T05:43:47.927436437Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 14 05:43:47.927587 containerd[1620]: time="2026-01-14T05:43:47.927512359Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.927919008Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.927949565Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.927969883Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.927990351Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.928302925Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.928378546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.928397501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.928473814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.928487038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.928499191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.928513007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.928584580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.928656816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.928672294Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 14 05:43:47.930066 containerd[1620]: time="2026-01-14T05:43:47.928750831Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 14 05:43:47.930560 containerd[1620]: time="2026-01-14T05:43:47.928781448Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 14 05:43:47.931848 containerd[1620]: time="2026-01-14T05:43:47.931598596Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 14 05:43:47.931848 containerd[1620]: time="2026-01-14T05:43:47.931751642Z" level=info msg="Start snapshots syncer"
Jan 14 05:43:47.932034 containerd[1620]: time="2026-01-14T05:43:47.931943400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 14 05:43:47.934294 containerd[1620]: time="2026-01-14T05:43:47.934028792Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 14 05:43:47.938605 containerd[1620]: time="2026-01-14T05:43:47.938301728Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 14 05:43:47.941572 containerd[1620]: time="2026-01-14T05:43:47.941327716Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 14 05:43:47.941632 containerd[1620]: time="2026-01-14T05:43:47.941609442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 14 05:43:47.941681 containerd[1620]: time="2026-01-14T05:43:47.941650318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 14 05:43:47.941786 containerd[1620]: time="2026-01-14T05:43:47.941673652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 14 05:43:47.941988 containerd[1620]: time="2026-01-14T05:43:47.941799106Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 14 05:43:47.941988 containerd[1620]: time="2026-01-14T05:43:47.941894273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 14 05:43:47.941988 containerd[1620]: time="2026-01-14T05:43:47.941932815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 14 05:43:47.941988 containerd[1620]: time="2026-01-14T05:43:47.941949647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 14 05:43:47.941988 containerd[1620]: time="2026-01-14T05:43:47.941967941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 14 05:43:47.942290 containerd[1620]: time="2026-01-14T05:43:47.941991866Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 14 05:43:47.942427 containerd[1620]: time="2026-01-14T05:43:47.942336589Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 14 05:43:47.942455 containerd[1620]: time="2026-01-14T05:43:47.942425525Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 14 05:43:47.942455 containerd[1620]: time="2026-01-14T05:43:47.942445042Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 14 05:43:47.942489 containerd[1620]: time="2026-01-14T05:43:47.942466742Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 14 05:43:47.942517 containerd[1620]: time="2026-01-14T05:43:47.942481800Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 14 05:43:47.942517 containerd[1620]: time="2026-01-14T05:43:47.942499202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 14 05:43:47.942743 containerd[1620]: time="2026-01-14T05:43:47.942600562Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 14 05:43:47.942902 containerd[1620]: time="2026-01-14T05:43:47.942820272Z" level=info msg="runtime interface created"
Jan 14 05:43:47.942902 containerd[1620]: time="2026-01-14T05:43:47.942893328Z" level=info msg="created NRI interface"
Jan 14 05:43:47.942945 containerd[1620]: time="2026-01-14T05:43:47.942903968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 14 05:43:47.942945 containerd[1620]: time="2026-01-14T05:43:47.942919898Z" level=info msg="Connect containerd service"
Jan 14 05:43:47.942985 containerd[1620]: time="2026-01-14T05:43:47.942943933Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 14 05:43:47.959290 containerd[1620]: time="2026-01-14T05:43:47.958895417Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 05:43:49.222956 tar[1608]: linux-amd64/README.md
Jan 14 05:43:49.533956 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 14 05:43:49.892476 containerd[1620]: time="2026-01-14T05:43:49.891516455Z" level=info msg="Start subscribing containerd event"
Jan 14 05:43:49.894656 containerd[1620]: time="2026-01-14T05:43:49.893908499Z" level=info msg="Start recovering state"
Jan 14 05:43:49.896997 containerd[1620]: time="2026-01-14T05:43:49.896796299Z" level=info msg="Start event monitor"
Jan 14 05:43:49.897449 containerd[1620]: time="2026-01-14T05:43:49.897381131Z" level=info msg="Start cni network conf syncer for default"
Jan 14 05:43:49.898572 containerd[1620]: time="2026-01-14T05:43:49.898484952Z" level=info msg="Start streaming server"
Jan 14 05:43:49.899009 containerd[1620]: time="2026-01-14T05:43:49.898656321Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 14 05:43:49.899009 containerd[1620]: time="2026-01-14T05:43:49.898825036Z" level=info msg="runtime interface starting up..."
Jan 14 05:43:49.899009 containerd[1620]: time="2026-01-14T05:43:49.898837219Z" level=info msg="starting plugins..."
Jan 14 05:43:49.899009 containerd[1620]: time="2026-01-14T05:43:49.898859340Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 14 05:43:49.899931 containerd[1620]: time="2026-01-14T05:43:49.899799264Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 14 05:43:49.899931 containerd[1620]: time="2026-01-14T05:43:49.899907577Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 14 05:43:49.900947 systemd[1]: Started containerd.service - containerd container runtime.
Jan 14 05:43:49.901431 containerd[1620]: time="2026-01-14T05:43:49.901388225Z" level=info msg="containerd successfully booted in 2.133505s"
Jan 14 05:43:52.267290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 05:43:52.268001 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 14 05:43:52.269614 systemd[1]: Startup finished in 5.956s (kernel) + 11.929s (initrd) + 20.086s (userspace) = 37.973s.
Jan 14 05:43:52.311035 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 05:43:52.321339 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 14 05:43:52.323650 systemd[1]: Started sshd@0-10.0.0.28:22-10.0.0.1:33888.service - OpenSSH per-connection server daemon (10.0.0.1:33888).
Jan 14 05:43:52.529563 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 33888 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:43:52.533704 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:43:52.555883 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 14 05:43:52.557480 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 14 05:43:52.572330 systemd-logind[1596]: New session 1 of user core.
Jan 14 05:43:52.603593 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 14 05:43:52.608554 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 14 05:43:52.635870 (systemd)[1724]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:43:52.642833 systemd-logind[1596]: New session 2 of user core.
Jan 14 05:43:52.814918 systemd[1724]: Queued start job for default target default.target.
Jan 14 05:43:52.831295 systemd[1724]: Created slice app.slice - User Application Slice.
Jan 14 05:43:52.831380 systemd[1724]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 14 05:43:52.831394 systemd[1724]: Reached target paths.target - Paths.
Jan 14 05:43:52.831448 systemd[1724]: Reached target timers.target - Timers.
Jan 14 05:43:52.834117 systemd[1724]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 14 05:43:52.835899 systemd[1724]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 14 05:43:52.853974 systemd[1724]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 14 05:43:52.854135 systemd[1724]: Reached target sockets.target - Sockets.
Jan 14 05:43:52.857987 systemd[1724]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 14 05:43:52.858293 systemd[1724]: Reached target basic.target - Basic System.
Jan 14 05:43:52.858403 systemd[1724]: Reached target default.target - Main User Target.
Jan 14 05:43:52.858550 systemd[1724]: Startup finished in 204ms.
Jan 14 05:43:52.859396 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 14 05:43:52.868509 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 14 05:43:52.899447 systemd[1]: Started sshd@1-10.0.0.28:22-10.0.0.1:33900.service - OpenSSH per-connection server daemon (10.0.0.1:33900).
Jan 14 05:43:53.030898 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 33900 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:43:53.034019 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:43:53.046936 systemd-logind[1596]: New session 3 of user core.
Jan 14 05:43:53.059524 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 14 05:43:53.091414 sshd[1744]: Connection closed by 10.0.0.1 port 33900
Jan 14 05:43:53.092532 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
Jan 14 05:43:53.106016 systemd[1]: sshd@1-10.0.0.28:22-10.0.0.1:33900.service: Deactivated successfully.
Jan 14 05:43:53.109617 systemd[1]: session-3.scope: Deactivated successfully.
Jan 14 05:43:53.112122 systemd-logind[1596]: Session 3 logged out. Waiting for processes to exit.
Jan 14 05:43:53.115822 systemd[1]: Started sshd@2-10.0.0.28:22-10.0.0.1:33910.service - OpenSSH per-connection server daemon (10.0.0.1:33910).
Jan 14 05:43:53.118434 systemd-logind[1596]: Removed session 3.
Jan 14 05:43:53.192929 kubelet[1711]: E0114 05:43:53.192709 1711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 05:43:53.197117 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 05:43:53.197610 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 05:43:53.198704 systemd[1]: kubelet.service: Consumed 6.689s CPU time, 254.8M memory peak.
Jan 14 05:43:53.212610 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 33910 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:43:53.216389 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:43:53.226707 systemd-logind[1596]: New session 4 of user core.
Jan 14 05:43:53.236680 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 14 05:43:53.260611 sshd[1755]: Connection closed by 10.0.0.1 port 33910
Jan 14 05:43:53.261077 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Jan 14 05:43:53.276696 systemd[1]: sshd@2-10.0.0.28:22-10.0.0.1:33910.service: Deactivated successfully.
Jan 14 05:43:53.279683 systemd[1]: session-4.scope: Deactivated successfully.
Jan 14 05:43:53.281861 systemd-logind[1596]: Session 4 logged out. Waiting for processes to exit.
Jan 14 05:43:53.286396 systemd[1]: Started sshd@3-10.0.0.28:22-10.0.0.1:33922.service - OpenSSH per-connection server daemon (10.0.0.1:33922).
Jan 14 05:43:53.287876 systemd-logind[1596]: Removed session 4.
Jan 14 05:43:53.393724 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 33922 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:43:53.396623 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:43:53.405944 systemd-logind[1596]: New session 5 of user core.
Jan 14 05:43:53.419591 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 14 05:43:53.450599 sshd[1765]: Connection closed by 10.0.0.1 port 33922
Jan 14 05:43:53.450700 sshd-session[1761]: pam_unix(sshd:session): session closed for user core
Jan 14 05:43:53.466480 systemd[1]: sshd@3-10.0.0.28:22-10.0.0.1:33922.service: Deactivated successfully.
Jan 14 05:43:53.469007 systemd[1]: session-5.scope: Deactivated successfully.
Jan 14 05:43:53.470929 systemd-logind[1596]: Session 5 logged out. Waiting for processes to exit.
Jan 14 05:43:53.474535 systemd[1]: Started sshd@4-10.0.0.28:22-10.0.0.1:33926.service - OpenSSH per-connection server daemon (10.0.0.1:33926).
Jan 14 05:43:53.475599 systemd-logind[1596]: Removed session 5.
Jan 14 05:43:53.573613 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 33926 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:43:53.575983 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:43:53.587547 systemd-logind[1596]: New session 6 of user core.
Jan 14 05:43:53.597611 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 14 05:43:53.648134 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 14 05:43:53.648980 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 05:43:53.675001 sudo[1776]: pam_unix(sudo:session): session closed for user root
Jan 14 05:43:53.678809 sshd[1775]: Connection closed by 10.0.0.1 port 33926
Jan 14 05:43:53.679532 sshd-session[1771]: pam_unix(sshd:session): session closed for user core
Jan 14 05:43:53.688677 systemd[1]: sshd@4-10.0.0.28:22-10.0.0.1:33926.service: Deactivated successfully.
Jan 14 05:43:53.691563 systemd[1]: session-6.scope: Deactivated successfully.
Jan 14 05:43:53.694086 systemd-logind[1596]: Session 6 logged out. Waiting for processes to exit.
Jan 14 05:43:53.696984 systemd[1]: Started sshd@5-10.0.0.28:22-10.0.0.1:33930.service - OpenSSH per-connection server daemon (10.0.0.1:33930).
Jan 14 05:43:53.698427 systemd-logind[1596]: Removed session 6.
Jan 14 05:43:53.799958 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 33930 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:43:53.803036 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:43:53.812560 systemd-logind[1596]: New session 7 of user core.
Jan 14 05:43:53.822549 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 14 05:43:53.854727 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 14 05:43:53.855501 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 05:43:53.862564 sudo[1789]: pam_unix(sudo:session): session closed for user root
Jan 14 05:43:53.881505 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 14 05:43:53.881990 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 05:43:53.899446 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 05:43:53.982000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 14 05:43:53.983824 augenrules[1813]: No rules
Jan 14 05:43:53.985620 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 05:43:53.986108 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 05:43:53.987629 kernel: kauditd_printk_skb: 76 callbacks suppressed
Jan 14 05:43:53.987686 kernel: audit: type=1305 audit(1768369433.982:218): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 14 05:43:53.987542 sudo[1788]: pam_unix(sudo:session): session closed for user root
Jan 14 05:43:53.982000 audit[1813]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc5948ff30 a2=420 a3=0 items=0 ppid=1794 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:43:53.998883 sshd[1787]: Connection closed by 10.0.0.1 port 33930
Jan 14 05:43:54.001454 sshd-session[1783]: pam_unix(sshd:session): session closed for user core
Jan 14 05:43:54.020515 kernel: audit: type=1300 audit(1768369433.982:218): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc5948ff30 a2=420 a3=0 items=0 ppid=1794 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:43:54.020619 kernel: audit: type=1327 audit(1768369433.982:218): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 05:43:53.982000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 05:43:53.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:54.048687 kernel: audit: type=1130 audit(1768369433.983:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:53.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:54.065614 kernel: audit: type=1131 audit(1768369433.983:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:53.987000 audit[1788]: USER_END pid=1788 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:54.083654 kernel: audit: type=1106 audit(1768369433.987:221): pid=1788 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:53.987000 audit[1788]: CRED_DISP pid=1788 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:54.099582 kernel: audit: type=1104 audit(1768369433.987:222): pid=1788 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:54.003000 audit[1783]: USER_END pid=1783 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:43:54.105660 systemd[1]: sshd@5-10.0.0.28:22-10.0.0.1:33930.service: Deactivated successfully.
Jan 14 05:43:54.108139 systemd[1]: session-7.scope: Deactivated successfully.
Jan 14 05:43:54.110337 systemd-logind[1596]: Session 7 logged out. Waiting for processes to exit.
Jan 14 05:43:54.114040 systemd[1]: Started sshd@6-10.0.0.28:22-10.0.0.1:33938.service - OpenSSH per-connection server daemon (10.0.0.1:33938).
Jan 14 05:43:54.115669 systemd-logind[1596]: Removed session 7.
Jan 14 05:43:54.127729 kernel: audit: type=1106 audit(1768369434.003:223): pid=1783 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:43:54.003000 audit[1783]: CRED_DISP pid=1783 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:43:54.152565 kernel: audit: type=1104 audit(1768369434.003:224): pid=1783 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:43:54.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.28:22-10.0.0.1:33930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:54.174366 kernel: audit: type=1131 audit(1768369434.104:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.28:22-10.0.0.1:33930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:54.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.28:22-10.0.0.1:33938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:43:54.213000 audit[1822]: USER_ACCT pid=1822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:43:54.214097 sshd[1822]: Accepted publickey for core from 10.0.0.1 port 33938 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:43:54.215000 audit[1822]: CRED_ACQ pid=1822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:43:54.215000 audit[1822]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd042988e0 a2=3 a3=0 items=0 ppid=1 pid=1822 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:43:54.215000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:43:54.216971 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:43:54.227981 systemd-logind[1596]: New session 8 of user core.
Jan 14 05:43:54.243899 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 05:43:54.249000 audit[1822]: USER_START pid=1822 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:43:54.252000 audit[1826]: CRED_ACQ pid=1826 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:43:54.271000 audit[1827]: USER_ACCT pid=1827 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 05:43:54.271917 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 05:43:54.271000 audit[1827]: CRED_REFR pid=1827 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 05:43:54.272527 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 05:43:54.272000 audit[1827]: USER_START pid=1827 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 05:43:54.941518 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 14 05:43:54.972855 (dockerd)[1848]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 05:43:55.383288 dockerd[1848]: time="2026-01-14T05:43:55.383117908Z" level=info msg="Starting up" Jan 14 05:43:55.384907 dockerd[1848]: time="2026-01-14T05:43:55.384633828Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 05:43:55.446387 dockerd[1848]: time="2026-01-14T05:43:55.446292205Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 05:43:55.609246 dockerd[1848]: time="2026-01-14T05:43:55.608911571Z" level=info msg="Loading containers: start." Jan 14 05:43:55.634464 kernel: Initializing XFRM netlink socket Jan 14 05:43:55.814000 audit[1901]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:55.814000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe60df8540 a2=0 a3=0 items=0 ppid=1848 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:55.814000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 05:43:55.825000 audit[1903]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:55.825000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc3c9ff360 a2=0 a3=0 items=0 ppid=1848 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
05:43:55.825000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 05:43:55.834000 audit[1905]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:55.834000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd1505440 a2=0 a3=0 items=0 ppid=1848 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:55.834000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 05:43:55.843000 audit[1907]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:55.843000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2c1f18f0 a2=0 a3=0 items=0 ppid=1848 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:55.843000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 05:43:55.854000 audit[1909]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:55.854000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe1aa380c0 a2=0 a3=0 items=0 ppid=1848 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:55.854000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 05:43:55.865000 audit[1911]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:55.865000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe58f6bf90 a2=0 a3=0 items=0 ppid=1848 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:55.865000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 05:43:55.874000 audit[1913]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:55.874000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffde0d16e10 a2=0 a3=0 items=0 ppid=1848 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:55.874000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 05:43:55.884000 audit[1915]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:55.884000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe065aad00 a2=0 a3=0 items=0 ppid=1848 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:55.884000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 05:43:55.959000 audit[1918]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:55.959000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff452756f0 a2=0 a3=0 items=0 ppid=1848 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:55.959000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 05:43:55.968000 audit[1920]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:55.968000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc8bdb9db0 a2=0 a3=0 items=0 ppid=1848 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:55.968000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 05:43:55.977000 audit[1922]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:55.977000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc05f204f0 a2=0 a3=0 items=0 ppid=1848 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:55.977000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 05:43:55.985000 audit[1924]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:55.985000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff576a33d0 a2=0 a3=0 items=0 ppid=1848 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:55.985000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 05:43:55.994000 audit[1926]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:55.994000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff55e67410 a2=0 a3=0 items=0 ppid=1848 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:55.994000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 05:43:56.148000 audit[1956]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.148000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc70c38f30 a2=0 a3=0 items=0 ppid=1848 pid=1956 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.148000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 05:43:56.159000 audit[1958]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.159000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff81590650 a2=0 a3=0 items=0 ppid=1848 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 05:43:56.167000 audit[1960]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.167000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda074d870 a2=0 a3=0 items=0 ppid=1848 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 05:43:56.175000 audit[1962]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.175000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe6e22880 a2=0 a3=0 items=0 ppid=1848 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 05:43:56.184000 audit[1964]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.184000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc33e7a1e0 a2=0 a3=0 items=0 ppid=1848 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.184000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 05:43:56.193000 audit[1966]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.193000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdbf2053d0 a2=0 a3=0 items=0 ppid=1848 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.193000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 05:43:56.203000 audit[1968]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.203000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff89d7bb80 a2=0 a3=0 items=0 ppid=1848 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.203000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 05:43:56.211000 audit[1970]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.211000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffc3da51e0 a2=0 a3=0 items=0 ppid=1848 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.211000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 05:43:56.223000 audit[1972]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.223000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff9c6c1330 a2=0 a3=0 items=0 ppid=1848 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.223000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 05:43:56.233000 audit[1974]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 
05:43:56.233000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdeced6fc0 a2=0 a3=0 items=0 ppid=1848 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.233000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 05:43:56.241000 audit[1976]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.241000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe1cb24b80 a2=0 a3=0 items=0 ppid=1848 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.241000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 05:43:56.251000 audit[1978]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.251000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd14b746f0 a2=0 a3=0 items=0 ppid=1848 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.251000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 05:43:56.259000 audit[1980]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1980 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.259000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffeb1c0f70 a2=0 a3=0 items=0 ppid=1848 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.259000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 05:43:56.282000 audit[1985]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:56.282000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc4d363160 a2=0 a3=0 items=0 ppid=1848 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.282000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 05:43:56.291000 audit[1987]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:56.291000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffea7bad660 a2=0 a3=0 items=0 ppid=1848 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.291000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 05:43:56.300000 audit[1989]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1989 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:56.300000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd9c77ed50 a2=0 a3=0 items=0 ppid=1848 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.300000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 05:43:56.309000 audit[1991]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.309000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd04da32d0 a2=0 a3=0 items=0 ppid=1848 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.309000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 05:43:56.318000 audit[1993]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.318000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffda8916950 a2=0 a3=0 items=0 ppid=1848 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.318000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 05:43:56.327000 audit[1995]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1995 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:43:56.327000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffef40a28e0 a2=0 a3=0 items=0 ppid=1848 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.327000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 05:43:56.365000 audit[2000]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:56.365000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc2f00e7b0 a2=0 a3=0 items=0 ppid=1848 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.365000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 05:43:56.374000 audit[2002]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2002 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:56.374000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff94859a10 a2=0 a3=0 items=0 ppid=1848 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.374000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 05:43:56.412000 audit[2010]: 
NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:56.412000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc6b43fe90 a2=0 a3=0 items=0 ppid=1848 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.412000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 05:43:56.445000 audit[2016]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:56.445000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc680912b0 a2=0 a3=0 items=0 ppid=1848 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.445000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 05:43:56.457000 audit[2018]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:56.457000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffed748f9a0 a2=0 a3=0 items=0 ppid=1848 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.457000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 05:43:56.466000 audit[2020]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:56.466000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc22f56c70 a2=0 a3=0 items=0 ppid=1848 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.466000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 05:43:56.474000 audit[2022]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:56.474000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffd1bb7300 a2=0 a3=0 items=0 ppid=1848 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.474000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 05:43:56.483000 audit[2024]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:43:56.483000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffee94bb160 
a2=0 a3=0 items=0 ppid=1848 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:43:56.483000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 05:43:56.485421 systemd-networkd[1505]: docker0: Link UP Jan 14 05:43:56.495061 dockerd[1848]: time="2026-01-14T05:43:56.494863340Z" level=info msg="Loading containers: done." Jan 14 05:43:56.528971 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck104085219-merged.mount: Deactivated successfully. Jan 14 05:43:56.536854 dockerd[1848]: time="2026-01-14T05:43:56.536309732Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 05:43:56.536854 dockerd[1848]: time="2026-01-14T05:43:56.536417252Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 05:43:56.536854 dockerd[1848]: time="2026-01-14T05:43:56.536532948Z" level=info msg="Initializing buildkit" Jan 14 05:43:56.620880 dockerd[1848]: time="2026-01-14T05:43:56.620714761Z" level=info msg="Completed buildkit initialization" Jan 14 05:43:56.626566 dockerd[1848]: time="2026-01-14T05:43:56.626404040Z" level=info msg="Daemon has completed initialization" Jan 14 05:43:56.627052 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 14 05:43:56.627407 dockerd[1848]: time="2026-01-14T05:43:56.627028257Z" level=info msg="API listen on /run/docker.sock" Jan 14 05:43:56.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:44:01.408985 containerd[1620]: time="2026-01-14T05:44:01.408296687Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 14 05:44:03.280767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 05:44:03.286705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 05:44:03.844743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount823920627.mount: Deactivated successfully. Jan 14 05:44:06.388086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 05:44:06.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:44:06.393551 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 14 05:44:06.393611 kernel: audit: type=1130 audit(1768369446.388:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:44:06.417320 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 05:44:06.890957 kubelet[2135]: E0114 05:44:06.889084 2135 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 05:44:06.915132 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 05:44:06.923062 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 05:44:06.928127 systemd[1]: kubelet.service: Consumed 3.305s CPU time, 109.7M memory peak. Jan 14 05:44:06.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 05:44:06.949647 kernel: audit: type=1131 audit(1768369446.927:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 05:44:08.904145 containerd[1620]: time="2026-01-14T05:44:08.903789784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:08.905408 containerd[1620]: time="2026-01-14T05:44:08.905382732Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25503882" Jan 14 05:44:08.910116 containerd[1620]: time="2026-01-14T05:44:08.909988168Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:08.920438 containerd[1620]: time="2026-01-14T05:44:08.920108262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:08.923868 containerd[1620]: time="2026-01-14T05:44:08.923682309Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 7.514126331s" Jan 14 05:44:08.923868 containerd[1620]: time="2026-01-14T05:44:08.923856965Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 14 05:44:08.934054 containerd[1620]: time="2026-01-14T05:44:08.934019213Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 14 05:44:14.924718 containerd[1620]: time="2026-01-14T05:44:14.924345511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:14.926567 containerd[1620]: time="2026-01-14T05:44:14.925112938Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 14 05:44:14.931288 containerd[1620]: time="2026-01-14T05:44:14.930891604Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:14.957435 containerd[1620]: time="2026-01-14T05:44:14.956879506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:14.970369 containerd[1620]: time="2026-01-14T05:44:14.969774754Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 6.035649285s" Jan 14 05:44:14.970369 containerd[1620]: time="2026-01-14T05:44:14.970026175Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 14 05:44:14.974771 containerd[1620]: time="2026-01-14T05:44:14.974649282Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 14 05:44:17.338070 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 05:44:17.372454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 05:44:17.800431 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 05:44:17.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:44:17.815340 kernel: audit: type=1130 audit(1768369457.800:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:44:17.827800 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 05:44:18.466263 containerd[1620]: time="2026-01-14T05:44:18.466019424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:18.467781 containerd[1620]: time="2026-01-14T05:44:18.467748666Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15717792" Jan 14 05:44:18.470681 containerd[1620]: time="2026-01-14T05:44:18.470491920Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:18.512458 containerd[1620]: time="2026-01-14T05:44:18.512409301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:18.512702 containerd[1620]: time="2026-01-14T05:44:18.512658615Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 3.53792621s" Jan 14 05:44:18.513065 containerd[1620]: time="2026-01-14T05:44:18.512762566Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 14 05:44:18.516009 containerd[1620]: time="2026-01-14T05:44:18.515592106Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 14 05:44:18.616068 kubelet[2160]: E0114 05:44:18.615965 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 05:44:18.622126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 05:44:18.622538 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 05:44:18.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 05:44:18.623354 systemd[1]: kubelet.service: Consumed 1.158s CPU time, 111M memory peak. Jan 14 05:44:18.642466 kernel: audit: type=1131 audit(1768369458.622:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 05:44:19.776085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1616235285.mount: Deactivated successfully. 
Jan 14 05:44:20.333872 containerd[1620]: time="2026-01-14T05:44:20.333567747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:20.335345 containerd[1620]: time="2026-01-14T05:44:20.335316492Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Jan 14 05:44:20.337021 containerd[1620]: time="2026-01-14T05:44:20.336950396Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:20.341662 containerd[1620]: time="2026-01-14T05:44:20.341487683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:20.342716 containerd[1620]: time="2026-01-14T05:44:20.341901887Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.826221138s" Jan 14 05:44:20.342716 containerd[1620]: time="2026-01-14T05:44:20.341928667Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 14 05:44:20.343276 containerd[1620]: time="2026-01-14T05:44:20.343007126Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 14 05:44:20.919796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1178861854.mount: Deactivated successfully. 
Jan 14 05:44:22.178500 containerd[1620]: time="2026-01-14T05:44:22.178404386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:22.179849 containerd[1620]: time="2026-01-14T05:44:22.179764638Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=0" Jan 14 05:44:22.181336 containerd[1620]: time="2026-01-14T05:44:22.181281309Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:22.185490 containerd[1620]: time="2026-01-14T05:44:22.185148565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:22.186622 containerd[1620]: time="2026-01-14T05:44:22.186439674Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.843358722s" Jan 14 05:44:22.186622 containerd[1620]: time="2026-01-14T05:44:22.186540460Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 14 05:44:22.187872 containerd[1620]: time="2026-01-14T05:44:22.187488923Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 14 05:44:22.628441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3275208337.mount: Deactivated successfully. 
Jan 14 05:44:22.640958 containerd[1620]: time="2026-01-14T05:44:22.640766959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:22.644360 containerd[1620]: time="2026-01-14T05:44:22.644003079Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 14 05:44:22.646682 containerd[1620]: time="2026-01-14T05:44:22.646424811Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:22.652334 containerd[1620]: time="2026-01-14T05:44:22.651932509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:22.652643 containerd[1620]: time="2026-01-14T05:44:22.652479854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 464.832079ms" Jan 14 05:44:22.652643 containerd[1620]: time="2026-01-14T05:44:22.652620605Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 14 05:44:22.654343 containerd[1620]: time="2026-01-14T05:44:22.653935163Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 14 05:44:23.263135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount528146946.mount: Deactivated successfully. 
Jan 14 05:44:26.053968 containerd[1620]: time="2026-01-14T05:44:26.053734853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:26.055623 containerd[1620]: time="2026-01-14T05:44:26.055498970Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=61186606" Jan 14 05:44:26.057828 containerd[1620]: time="2026-01-14T05:44:26.057666062Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:26.061071 containerd[1620]: time="2026-01-14T05:44:26.060935955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:26.061984 containerd[1620]: time="2026-01-14T05:44:26.061863392Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.407888666s" Jan 14 05:44:26.061984 containerd[1620]: time="2026-01-14T05:44:26.061939564Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 14 05:44:28.779114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 05:44:28.781360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 05:44:29.055874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 05:44:29.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:44:29.072445 kernel: audit: type=1130 audit(1768369469.055:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:44:29.080627 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 05:44:29.162664 kubelet[2318]: E0114 05:44:29.162489 2318 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 05:44:29.166332 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 05:44:29.166563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 05:44:29.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 05:44:29.167370 systemd[1]: kubelet.service: Consumed 300ms CPU time, 112.2M memory peak. Jan 14 05:44:29.181330 kernel: audit: type=1131 audit(1768369469.166:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 05:44:29.620890 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 05:44:29.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:44:29.621327 systemd[1]: kubelet.service: Consumed 300ms CPU time, 112.2M memory peak. Jan 14 05:44:29.624084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 05:44:29.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:44:29.650308 kernel: audit: type=1130 audit(1768369469.619:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:44:29.650368 kernel: audit: type=1131 audit(1768369469.620:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:44:29.660649 systemd[1]: Reload requested from client PID 2332 ('systemctl') (unit session-8.scope)... Jan 14 05:44:29.660801 systemd[1]: Reloading... Jan 14 05:44:29.766365 zram_generator::config[2377]: No configuration found. Jan 14 05:44:30.022615 systemd[1]: Reloading finished in 361 ms. 
Jan 14 05:44:30.058000 audit: BPF prog-id=61 op=LOAD Jan 14 05:44:30.058000 audit: BPF prog-id=58 op=UNLOAD Jan 14 05:44:30.069330 kernel: audit: type=1334 audit(1768369470.058:284): prog-id=61 op=LOAD Jan 14 05:44:30.069401 kernel: audit: type=1334 audit(1768369470.058:285): prog-id=58 op=UNLOAD Jan 14 05:44:30.069439 kernel: audit: type=1334 audit(1768369470.058:286): prog-id=62 op=LOAD Jan 14 05:44:30.069467 kernel: audit: type=1334 audit(1768369470.058:287): prog-id=63 op=LOAD Jan 14 05:44:30.069497 kernel: audit: type=1334 audit(1768369470.058:288): prog-id=59 op=UNLOAD Jan 14 05:44:30.069522 kernel: audit: type=1334 audit(1768369470.058:289): prog-id=60 op=UNLOAD Jan 14 05:44:30.058000 audit: BPF prog-id=62 op=LOAD Jan 14 05:44:30.058000 audit: BPF prog-id=63 op=LOAD Jan 14 05:44:30.058000 audit: BPF prog-id=59 op=UNLOAD Jan 14 05:44:30.058000 audit: BPF prog-id=60 op=UNLOAD Jan 14 05:44:30.060000 audit: BPF prog-id=64 op=LOAD Jan 14 05:44:30.060000 audit: BPF prog-id=57 op=UNLOAD Jan 14 05:44:30.062000 audit: BPF prog-id=65 op=LOAD Jan 14 05:44:30.062000 audit: BPF prog-id=56 op=UNLOAD Jan 14 05:44:30.064000 audit: BPF prog-id=66 op=LOAD Jan 14 05:44:30.064000 audit: BPF prog-id=41 op=UNLOAD Jan 14 05:44:30.064000 audit: BPF prog-id=67 op=LOAD Jan 14 05:44:30.064000 audit: BPF prog-id=68 op=LOAD Jan 14 05:44:30.064000 audit: BPF prog-id=42 op=UNLOAD Jan 14 05:44:30.064000 audit: BPF prog-id=43 op=UNLOAD Jan 14 05:44:30.065000 audit: BPF prog-id=69 op=LOAD Jan 14 05:44:30.065000 audit: BPF prog-id=44 op=UNLOAD Jan 14 05:44:30.065000 audit: BPF prog-id=70 op=LOAD Jan 14 05:44:30.065000 audit: BPF prog-id=71 op=LOAD Jan 14 05:44:30.065000 audit: BPF prog-id=45 op=UNLOAD Jan 14 05:44:30.065000 audit: BPF prog-id=46 op=UNLOAD Jan 14 05:44:30.066000 audit: BPF prog-id=72 op=LOAD Jan 14 05:44:30.066000 audit: BPF prog-id=51 op=UNLOAD Jan 14 05:44:30.067000 audit: BPF prog-id=73 op=LOAD Jan 14 05:44:30.067000 audit: BPF prog-id=74 op=LOAD Jan 14 05:44:30.067000 
audit: BPF prog-id=52 op=UNLOAD Jan 14 05:44:30.067000 audit: BPF prog-id=53 op=UNLOAD Jan 14 05:44:30.068000 audit: BPF prog-id=75 op=LOAD Jan 14 05:44:30.068000 audit: BPF prog-id=50 op=UNLOAD Jan 14 05:44:30.071000 audit: BPF prog-id=76 op=LOAD Jan 14 05:44:30.071000 audit: BPF prog-id=47 op=UNLOAD Jan 14 05:44:30.071000 audit: BPF prog-id=77 op=LOAD Jan 14 05:44:30.071000 audit: BPF prog-id=78 op=LOAD Jan 14 05:44:30.071000 audit: BPF prog-id=48 op=UNLOAD Jan 14 05:44:30.071000 audit: BPF prog-id=49 op=UNLOAD Jan 14 05:44:30.072000 audit: BPF prog-id=79 op=LOAD Jan 14 05:44:30.072000 audit: BPF prog-id=80 op=LOAD Jan 14 05:44:30.072000 audit: BPF prog-id=54 op=UNLOAD Jan 14 05:44:30.072000 audit: BPF prog-id=55 op=UNLOAD Jan 14 05:44:30.108883 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 05:44:30.109089 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 05:44:30.109879 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 05:44:30.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 05:44:30.115627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 05:44:30.331893 update_engine[1600]: I20260114 05:44:30.331456 1600 update_attempter.cc:509] Updating boot flags... Jan 14 05:44:30.441542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 05:44:30.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:44:30.457543 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 05:44:30.567347 kubelet[2426]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 05:44:30.567347 kubelet[2426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 05:44:30.567347 kubelet[2426]: I0114 05:44:30.566787 2426 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 05:44:30.779332 kubelet[2426]: I0114 05:44:30.778962 2426 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 14 05:44:30.779332 kubelet[2426]: I0114 05:44:30.779042 2426 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 05:44:30.781833 kubelet[2426]: I0114 05:44:30.781640 2426 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 14 05:44:30.781833 kubelet[2426]: I0114 05:44:30.781796 2426 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 05:44:30.782272 kubelet[2426]: I0114 05:44:30.782144 2426 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 05:44:30.792991 kubelet[2426]: I0114 05:44:30.792850 2426 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 05:44:30.794599 kubelet[2426]: E0114 05:44:30.794446 2426 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 05:44:30.800221 kubelet[2426]: I0114 05:44:30.799925 2426 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 05:44:30.807938 kubelet[2426]: I0114 05:44:30.807839 2426 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 14 05:44:30.808358 kubelet[2426]: I0114 05:44:30.808139 2426 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 05:44:30.808689 kubelet[2426]: I0114 05:44:30.808356 2426 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 05:44:30.808689 kubelet[2426]: I0114 05:44:30.808613 2426 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 05:44:30.808689 
kubelet[2426]: I0114 05:44:30.808622 2426 container_manager_linux.go:306] "Creating device plugin manager" Jan 14 05:44:30.808993 kubelet[2426]: I0114 05:44:30.808781 2426 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 14 05:44:30.813248 kubelet[2426]: I0114 05:44:30.813108 2426 state_mem.go:36] "Initialized new in-memory state store" Jan 14 05:44:30.814277 kubelet[2426]: I0114 05:44:30.814073 2426 kubelet.go:475] "Attempting to sync node with API server" Jan 14 05:44:30.814319 kubelet[2426]: I0114 05:44:30.814298 2426 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 05:44:30.814319 kubelet[2426]: I0114 05:44:30.814320 2426 kubelet.go:387] "Adding apiserver pod source" Jan 14 05:44:30.814580 kubelet[2426]: I0114 05:44:30.814392 2426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 05:44:30.824492 kubelet[2426]: E0114 05:44:30.824303 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 05:44:30.824854 kubelet[2426]: I0114 05:44:30.824562 2426 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 05:44:30.825134 kubelet[2426]: E0114 05:44:30.824649 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 05:44:30.829143 kubelet[2426]: I0114 05:44:30.829027 2426 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 05:44:30.829536 kubelet[2426]: I0114 05:44:30.829396 2426 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 14 05:44:30.830402 kubelet[2426]: W0114 05:44:30.830071 2426 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 05:44:30.844896 kubelet[2426]: I0114 05:44:30.843563 2426 server.go:1262] "Started kubelet" Jan 14 05:44:30.850320 kubelet[2426]: I0114 05:44:30.849542 2426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 05:44:30.861616 kubelet[2426]: I0114 05:44:30.861460 2426 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 05:44:30.866699 kubelet[2426]: I0114 05:44:30.865997 2426 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 14 05:44:30.867519 kubelet[2426]: I0114 05:44:30.867496 2426 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 05:44:30.870536 kubelet[2426]: E0114 05:44:30.870319 2426 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 05:44:30.871619 kubelet[2426]: E0114 05:44:30.850055 2426 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188a82aa1d25d9ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 
05:44:30.841412012 +0000 UTC m=+0.376700886,LastTimestamp:2026-01-14 05:44:30.841412012 +0000 UTC m=+0.376700886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 05:44:30.876885 kubelet[2426]: I0114 05:44:30.874306 2426 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 05:44:30.880931 kubelet[2426]: I0114 05:44:30.880295 2426 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 14 05:44:30.889488 kubelet[2426]: I0114 05:44:30.889014 2426 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 14 05:44:30.889832 kubelet[2426]: I0114 05:44:30.889643 2426 reconciler.go:29] "Reconciler: start to sync state" Jan 14 05:44:30.891151 kubelet[2426]: I0114 05:44:30.890930 2426 factory.go:223] Registration of the systemd container factory successfully Jan 14 05:44:30.891151 kubelet[2426]: I0114 05:44:30.891130 2426 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 05:44:30.894016 kubelet[2426]: E0114 05:44:30.893979 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="200ms" Jan 14 05:44:30.895623 kubelet[2426]: I0114 05:44:30.894124 2426 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 05:44:30.897155 kubelet[2426]: E0114 05:44:30.897127 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 05:44:30.901079 kubelet[2426]: I0114 05:44:30.901059 2426 server.go:310] "Adding debug handlers to kubelet server" Jan 14 05:44:30.904393 kubelet[2426]: I0114 05:44:30.904363 2426 factory.go:223] Registration of the containerd container factory successfully Jan 14 05:44:30.905341 kubelet[2426]: E0114 05:44:30.905085 2426 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 05:44:30.905000 audit[2460]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:30.905000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffde2b57ae0 a2=0 a3=0 items=0 ppid=2426 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:30.905000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 05:44:30.920000 audit[2462]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:30.920000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5ee38c70 a2=0 a3=0 items=0 ppid=2426 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:30.920000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 05:44:30.929000 audit[2464]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2464 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:30.929000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc77821bd0 a2=0 a3=0 items=0 ppid=2426 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:30.929000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 05:44:30.940000 audit[2468]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:30.940000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff193fdd80 a2=0 a3=0 items=0 ppid=2426 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:30.940000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 05:44:30.942541 kubelet[2426]: I0114 05:44:30.942481 2426 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 05:44:30.942541 kubelet[2426]: I0114 05:44:30.942542 2426 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 05:44:30.942622 kubelet[2426]: I0114 05:44:30.942558 2426 state_mem.go:36] "Initialized new in-memory state store" Jan 14 05:44:30.949103 kubelet[2426]: I0114 05:44:30.949013 2426 policy_none.go:49] "None policy: Start" Jan 14 05:44:30.949103 kubelet[2426]: I0114 05:44:30.949097 2426 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 14 05:44:30.949392 kubelet[2426]: I0114 05:44:30.949119 2426 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 14 05:44:30.952354 kubelet[2426]: 
I0114 05:44:30.952111 2426 policy_none.go:47] "Start" Jan 14 05:44:30.956000 audit[2471]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:30.956000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc1cfb1d40 a2=0 a3=0 items=0 ppid=2426 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:30.956000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 14 05:44:30.958697 kubelet[2426]: I0114 05:44:30.958530 2426 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 14 05:44:30.963100 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 14 05:44:30.961000 audit[2473]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:30.961000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe53fcaf50 a2=0 a3=0 items=0 ppid=2426 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:30.961000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 05:44:30.961000 audit[2472]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:30.961000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe29f53e20 a2=0 a3=0 items=0 ppid=2426 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:30.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 05:44:30.966006 kubelet[2426]: I0114 05:44:30.965339 2426 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 14 05:44:30.966006 kubelet[2426]: I0114 05:44:30.965359 2426 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 14 05:44:30.966006 kubelet[2426]: I0114 05:44:30.965502 2426 kubelet.go:2427] "Starting kubelet main sync loop" Jan 14 05:44:30.966006 kubelet[2426]: E0114 05:44:30.965551 2426 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 05:44:30.965000 audit[2474]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:30.965000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdcdafee20 a2=0 a3=0 items=0 ppid=2426 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:30.965000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 05:44:30.969371 kubelet[2426]: E0114 05:44:30.969344 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 05:44:30.968000 audit[2475]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:30.968000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc65f2220 a2=0 a3=0 items=0 ppid=2426 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:30.968000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 05:44:30.971000 audit[2476]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:30.971000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6e4629e0 a2=0 a3=0 items=0 ppid=2426 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:30.971000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 05:44:30.973588 kubelet[2426]: E0114 05:44:30.973495 2426 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 05:44:30.973000 audit[2477]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:30.973000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffede906c60 a2=0 a3=0 items=0 ppid=2426 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:30.973000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 05:44:30.977000 audit[2478]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:30.977000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe3b7d970 a2=0 a3=0 items=0 ppid=2426 pid=2478 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:30.977000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 05:44:30.990813 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 05:44:30.998093 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 05:44:31.012095 kubelet[2426]: E0114 05:44:31.011991 2426 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 05:44:31.013449 kubelet[2426]: I0114 05:44:31.013308 2426 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 05:44:31.013449 kubelet[2426]: I0114 05:44:31.013424 2426 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 05:44:31.017524 kubelet[2426]: I0114 05:44:31.014997 2426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 05:44:31.017524 kubelet[2426]: E0114 05:44:31.016854 2426 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 05:44:31.017524 kubelet[2426]: E0114 05:44:31.017098 2426 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 14 05:44:31.090935 kubelet[2426]: I0114 05:44:31.090711 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78a5a81fd347ad4599548dcd6f1dfbbc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"78a5a81fd347ad4599548dcd6f1dfbbc\") " pod="kube-system/kube-apiserver-localhost" Jan 14 05:44:31.093038 systemd[1]: Created slice kubepods-burstable-pod78a5a81fd347ad4599548dcd6f1dfbbc.slice - libcontainer container kubepods-burstable-pod78a5a81fd347ad4599548dcd6f1dfbbc.slice. Jan 14 05:44:31.105705 kubelet[2426]: E0114 05:44:31.105544 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="400ms" Jan 14 05:44:31.106415 kubelet[2426]: E0114 05:44:31.106298 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 05:44:31.114509 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. 
Jan 14 05:44:31.119011 kubelet[2426]: I0114 05:44:31.118961 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 05:44:31.119638 kubelet[2426]: E0114 05:44:31.119332 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Jan 14 05:44:31.124353 kubelet[2426]: E0114 05:44:31.123642 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 05:44:31.129980 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 14 05:44:31.134296 kubelet[2426]: E0114 05:44:31.134051 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 05:44:31.191941 kubelet[2426]: I0114 05:44:31.191893 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 05:44:31.191941 kubelet[2426]: I0114 05:44:31.191935 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 05:44:31.192631 kubelet[2426]: I0114 05:44:31.192088 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78a5a81fd347ad4599548dcd6f1dfbbc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"78a5a81fd347ad4599548dcd6f1dfbbc\") " pod="kube-system/kube-apiserver-localhost" Jan 14 05:44:31.192631 kubelet[2426]: I0114 05:44:31.192111 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 14 05:44:31.192631 kubelet[2426]: I0114 05:44:31.192485 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78a5a81fd347ad4599548dcd6f1dfbbc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"78a5a81fd347ad4599548dcd6f1dfbbc\") " pod="kube-system/kube-apiserver-localhost" Jan 14 05:44:31.192631 kubelet[2426]: I0114 05:44:31.192511 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 05:44:31.192631 kubelet[2426]: I0114 05:44:31.192547 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 05:44:31.192881 kubelet[2426]: I0114 05:44:31.192575 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 05:44:31.323519 kubelet[2426]: I0114 05:44:31.323268 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 05:44:31.324042 kubelet[2426]: E0114 05:44:31.323931 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Jan 14 05:44:31.412623 kubelet[2426]: E0114 05:44:31.412340 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:31.414658 containerd[1620]: time="2026-01-14T05:44:31.414516978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:78a5a81fd347ad4599548dcd6f1dfbbc,Namespace:kube-system,Attempt:0,}" Jan 14 05:44:31.428583 kubelet[2426]: E0114 05:44:31.428489 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:31.429367 containerd[1620]: time="2026-01-14T05:44:31.429116210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 14 05:44:31.439825 kubelet[2426]: E0114 05:44:31.439562 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:31.442069 containerd[1620]: time="2026-01-14T05:44:31.441673299Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 14 05:44:31.506926 kubelet[2426]: E0114 05:44:31.506839 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="800ms" Jan 14 05:44:31.706706 kubelet[2426]: E0114 05:44:31.706479 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 05:44:31.726277 kubelet[2426]: I0114 05:44:31.725986 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 05:44:31.727202 kubelet[2426]: E0114 05:44:31.726975 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Jan 14 05:44:31.749066 kubelet[2426]: E0114 05:44:31.748570 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 05:44:31.806077 kubelet[2426]: E0114 05:44:31.805868 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.RuntimeClass" Jan 14 05:44:31.892792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2636968381.mount: Deactivated successfully. Jan 14 05:44:31.904607 containerd[1620]: time="2026-01-14T05:44:31.904327495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 05:44:31.911522 containerd[1620]: time="2026-01-14T05:44:31.911487268Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 05:44:31.915018 containerd[1620]: time="2026-01-14T05:44:31.914939318Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 05:44:31.918351 containerd[1620]: time="2026-01-14T05:44:31.918021045Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 05:44:31.919850 containerd[1620]: time="2026-01-14T05:44:31.919651300Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 05:44:31.921384 kubelet[2426]: E0114 05:44:31.921135 2426 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188a82aa1d25d9ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 05:44:30.841412012 +0000 UTC m=+0.376700886,LastTimestamp:2026-01-14 05:44:30.841412012 +0000 UTC m=+0.376700886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 05:44:31.921553 containerd[1620]: time="2026-01-14T05:44:31.921365978Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 05:44:31.923067 containerd[1620]: time="2026-01-14T05:44:31.922799864Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 05:44:31.925064 containerd[1620]: time="2026-01-14T05:44:31.924965337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 05:44:31.926100 containerd[1620]: time="2026-01-14T05:44:31.925930783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 505.529785ms" Jan 14 05:44:31.931440 containerd[1620]: time="2026-01-14T05:44:31.931382759Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 498.02065ms" Jan 14 05:44:31.934296 containerd[1620]: time="2026-01-14T05:44:31.933990662Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 486.570308ms" Jan 14 05:44:31.992654 containerd[1620]: time="2026-01-14T05:44:31.990468335Z" level=info msg="connecting to shim 1ec3c397a4dc54e7284a95d8def8626904d94ddc2b523dbe2ade2a65414c92f9" address="unix:///run/containerd/s/bc2ff221b4e67addfc333fb3df2a12b32a42a8b8658e38efd5666d49a7e87ed9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:44:31.996009 containerd[1620]: time="2026-01-14T05:44:31.995866141Z" level=info msg="connecting to shim 97fffcc8b1b07efb93fac07f3408e020239c9c9122141d460601e72af9cc84fe" address="unix:///run/containerd/s/08aff3624add2e463f0594fd3f2aa7a1c16add32e01ad5c3b18ae168cf22a456" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:44:32.009656 containerd[1620]: time="2026-01-14T05:44:32.009399039Z" level=info msg="connecting to shim da3a4e385712aae50e03c1aa0f2d874d6829eadff29ca7e2ef7cccf7f869a923" address="unix:///run/containerd/s/b9daeb59673ae4b9abf56f2a0845ef6b30d07c84292cdeb98f99184089276bd0" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:44:32.061798 systemd[1]: Started cri-containerd-1ec3c397a4dc54e7284a95d8def8626904d94ddc2b523dbe2ade2a65414c92f9.scope - libcontainer container 1ec3c397a4dc54e7284a95d8def8626904d94ddc2b523dbe2ade2a65414c92f9. Jan 14 05:44:32.065487 systemd[1]: Started cri-containerd-da3a4e385712aae50e03c1aa0f2d874d6829eadff29ca7e2ef7cccf7f869a923.scope - libcontainer container da3a4e385712aae50e03c1aa0f2d874d6829eadff29ca7e2ef7cccf7f869a923. Jan 14 05:44:32.074607 systemd[1]: Started cri-containerd-97fffcc8b1b07efb93fac07f3408e020239c9c9122141d460601e72af9cc84fe.scope - libcontainer container 97fffcc8b1b07efb93fac07f3408e020239c9c9122141d460601e72af9cc84fe. 
Jan 14 05:44:32.092000 audit: BPF prog-id=81 op=LOAD Jan 14 05:44:32.094000 audit: BPF prog-id=82 op=LOAD Jan 14 05:44:32.094000 audit[2547]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2521 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.094000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461336134653338353731326161653530653033633161613066326438 Jan 14 05:44:32.094000 audit: BPF prog-id=82 op=UNLOAD Jan 14 05:44:32.094000 audit[2547]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.094000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461336134653338353731326161653530653033633161613066326438 Jan 14 05:44:32.096000 audit: BPF prog-id=83 op=LOAD Jan 14 05:44:32.098000 audit: BPF prog-id=84 op=LOAD Jan 14 05:44:32.098000 audit: BPF prog-id=85 op=LOAD Jan 14 05:44:32.098000 audit[2540]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2503 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.098000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165633363333937613464633534653732383461393564386465663836 Jan 14 05:44:32.098000 audit: BPF prog-id=85 op=UNLOAD Jan 14 05:44:32.098000 audit[2540]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165633363333937613464633534653732383461393564386465663836 Jan 14 05:44:32.099000 audit: BPF prog-id=86 op=LOAD Jan 14 05:44:32.099000 audit[2540]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2503 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165633363333937613464633534653732383461393564386465663836 Jan 14 05:44:32.099000 audit: BPF prog-id=87 op=LOAD Jan 14 05:44:32.099000 audit[2540]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2503 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 05:44:32.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165633363333937613464633534653732383461393564386465663836 Jan 14 05:44:32.099000 audit: BPF prog-id=87 op=UNLOAD Jan 14 05:44:32.099000 audit[2540]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165633363333937613464633534653732383461393564386465663836 Jan 14 05:44:32.099000 audit: BPF prog-id=86 op=UNLOAD Jan 14 05:44:32.099000 audit[2540]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165633363333937613464633534653732383461393564386465663836 Jan 14 05:44:32.099000 audit: BPF prog-id=88 op=LOAD Jan 14 05:44:32.099000 audit[2540]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2503 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165633363333937613464633534653732383461393564386465663836 Jan 14 05:44:32.098000 audit[2547]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2521 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461336134653338353731326161653530653033633161613066326438 Jan 14 05:44:32.100000 audit: BPF prog-id=89 op=LOAD Jan 14 05:44:32.100000 audit[2547]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2521 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461336134653338353731326161653530653033633161613066326438 Jan 14 05:44:32.100000 audit: BPF prog-id=89 op=UNLOAD Jan 14 05:44:32.100000 audit[2547]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461336134653338353731326161653530653033633161613066326438 Jan 14 05:44:32.101000 audit: BPF prog-id=84 op=UNLOAD Jan 14 05:44:32.101000 audit[2547]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461336134653338353731326161653530653033633161613066326438 Jan 14 05:44:32.101000 audit: BPF prog-id=90 op=LOAD Jan 14 05:44:32.101000 audit[2547]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2521 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461336134653338353731326161653530653033633161613066326438 Jan 14 05:44:32.106000 audit: BPF prog-id=91 op=LOAD Jan 14 05:44:32.108000 audit: BPF prog-id=92 op=LOAD Jan 14 05:44:32.108000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2502 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937666666636338623162303765666239336661633037663334303865 Jan 14 05:44:32.109000 audit: BPF prog-id=92 op=UNLOAD Jan 14 05:44:32.109000 audit[2538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2502 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937666666636338623162303765666239336661633037663334303865 Jan 14 05:44:32.110000 audit: BPF prog-id=93 op=LOAD Jan 14 05:44:32.110000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2502 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937666666636338623162303765666239336661633037663334303865 Jan 14 05:44:32.111000 audit: BPF prog-id=94 op=LOAD Jan 14 05:44:32.111000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2502 
pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937666666636338623162303765666239336661633037663334303865 Jan 14 05:44:32.112000 audit: BPF prog-id=94 op=UNLOAD Jan 14 05:44:32.112000 audit[2538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2502 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937666666636338623162303765666239336661633037663334303865 Jan 14 05:44:32.112000 audit: BPF prog-id=93 op=UNLOAD Jan 14 05:44:32.112000 audit[2538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2502 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937666666636338623162303765666239336661633037663334303865 Jan 14 05:44:32.112000 audit: BPF prog-id=95 op=LOAD Jan 14 05:44:32.112000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 
a2=98 a3=0 items=0 ppid=2502 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937666666636338623162303765666239336661633037663334303865 Jan 14 05:44:32.196685 containerd[1620]: time="2026-01-14T05:44:32.196602791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ec3c397a4dc54e7284a95d8def8626904d94ddc2b523dbe2ade2a65414c92f9\"" Jan 14 05:44:32.200025 containerd[1620]: time="2026-01-14T05:44:32.199950554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:78a5a81fd347ad4599548dcd6f1dfbbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"97fffcc8b1b07efb93fac07f3408e020239c9c9122141d460601e72af9cc84fe\"" Jan 14 05:44:32.201872 kubelet[2426]: E0114 05:44:32.201605 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:32.201872 kubelet[2426]: E0114 05:44:32.201836 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:32.203118 containerd[1620]: time="2026-01-14T05:44:32.203050036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"da3a4e385712aae50e03c1aa0f2d874d6829eadff29ca7e2ef7cccf7f869a923\"" Jan 14 05:44:32.203930 
kubelet[2426]: E0114 05:44:32.203907 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:32.210144 containerd[1620]: time="2026-01-14T05:44:32.210095704Z" level=info msg="CreateContainer within sandbox \"97fffcc8b1b07efb93fac07f3408e020239c9c9122141d460601e72af9cc84fe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 05:44:32.214324 containerd[1620]: time="2026-01-14T05:44:32.214011092Z" level=info msg="CreateContainer within sandbox \"1ec3c397a4dc54e7284a95d8def8626904d94ddc2b523dbe2ade2a65414c92f9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 05:44:32.217054 containerd[1620]: time="2026-01-14T05:44:32.217026618Z" level=info msg="CreateContainer within sandbox \"da3a4e385712aae50e03c1aa0f2d874d6829eadff29ca7e2ef7cccf7f869a923\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 05:44:32.227799 containerd[1620]: time="2026-01-14T05:44:32.227623079Z" level=info msg="Container 53df3113d93f73cb534996541c43a3fb83925bdfd6dbefe4a407918dbab97a84: CDI devices from CRI Config.CDIDevices: []" Jan 14 05:44:32.238003 containerd[1620]: time="2026-01-14T05:44:32.237908274Z" level=info msg="Container b0e7d8b870f41bd75cb38053fbd5ea1aac37d7ecd8939002ff23afae687f9217: CDI devices from CRI Config.CDIDevices: []" Jan 14 05:44:32.245780 containerd[1620]: time="2026-01-14T05:44:32.245452343Z" level=info msg="Container bb0a2de2a8bcbf6d677ec249c4aa5a888a722509f0855a04cb4a1c5bfc7ce570: CDI devices from CRI Config.CDIDevices: []" Jan 14 05:44:32.251295 containerd[1620]: time="2026-01-14T05:44:32.251084610Z" level=info msg="CreateContainer within sandbox \"97fffcc8b1b07efb93fac07f3408e020239c9c9122141d460601e72af9cc84fe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"53df3113d93f73cb534996541c43a3fb83925bdfd6dbefe4a407918dbab97a84\"" Jan 14 
05:44:32.252512 containerd[1620]: time="2026-01-14T05:44:32.252392275Z" level=info msg="StartContainer for \"53df3113d93f73cb534996541c43a3fb83925bdfd6dbefe4a407918dbab97a84\"" Jan 14 05:44:32.256082 containerd[1620]: time="2026-01-14T05:44:32.255945671Z" level=info msg="connecting to shim 53df3113d93f73cb534996541c43a3fb83925bdfd6dbefe4a407918dbab97a84" address="unix:///run/containerd/s/08aff3624add2e463f0594fd3f2aa7a1c16add32e01ad5c3b18ae168cf22a456" protocol=ttrpc version=3 Jan 14 05:44:32.257827 containerd[1620]: time="2026-01-14T05:44:32.257519906Z" level=info msg="CreateContainer within sandbox \"1ec3c397a4dc54e7284a95d8def8626904d94ddc2b523dbe2ade2a65414c92f9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b0e7d8b870f41bd75cb38053fbd5ea1aac37d7ecd8939002ff23afae687f9217\"" Jan 14 05:44:32.258651 containerd[1620]: time="2026-01-14T05:44:32.258619203Z" level=info msg="StartContainer for \"b0e7d8b870f41bd75cb38053fbd5ea1aac37d7ecd8939002ff23afae687f9217\"" Jan 14 05:44:32.260460 containerd[1620]: time="2026-01-14T05:44:32.260352290Z" level=info msg="connecting to shim b0e7d8b870f41bd75cb38053fbd5ea1aac37d7ecd8939002ff23afae687f9217" address="unix:///run/containerd/s/bc2ff221b4e67addfc333fb3df2a12b32a42a8b8658e38efd5666d49a7e87ed9" protocol=ttrpc version=3 Jan 14 05:44:32.263354 containerd[1620]: time="2026-01-14T05:44:32.263313593Z" level=info msg="CreateContainer within sandbox \"da3a4e385712aae50e03c1aa0f2d874d6829eadff29ca7e2ef7cccf7f869a923\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bb0a2de2a8bcbf6d677ec249c4aa5a888a722509f0855a04cb4a1c5bfc7ce570\"" Jan 14 05:44:32.264284 containerd[1620]: time="2026-01-14T05:44:32.263968528Z" level=info msg="StartContainer for \"bb0a2de2a8bcbf6d677ec249c4aa5a888a722509f0855a04cb4a1c5bfc7ce570\"" Jan 14 05:44:32.265566 containerd[1620]: time="2026-01-14T05:44:32.265473362Z" level=info msg="connecting to shim 
bb0a2de2a8bcbf6d677ec249c4aa5a888a722509f0855a04cb4a1c5bfc7ce570" address="unix:///run/containerd/s/b9daeb59673ae4b9abf56f2a0845ef6b30d07c84292cdeb98f99184089276bd0" protocol=ttrpc version=3 Jan 14 05:44:32.293027 systemd[1]: Started cri-containerd-53df3113d93f73cb534996541c43a3fb83925bdfd6dbefe4a407918dbab97a84.scope - libcontainer container 53df3113d93f73cb534996541c43a3fb83925bdfd6dbefe4a407918dbab97a84. Jan 14 05:44:32.304444 systemd[1]: Started cri-containerd-b0e7d8b870f41bd75cb38053fbd5ea1aac37d7ecd8939002ff23afae687f9217.scope - libcontainer container b0e7d8b870f41bd75cb38053fbd5ea1aac37d7ecd8939002ff23afae687f9217. Jan 14 05:44:32.308086 kubelet[2426]: E0114 05:44:32.307870 2426 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="1.6s" Jan 14 05:44:32.323875 systemd[1]: Started cri-containerd-bb0a2de2a8bcbf6d677ec249c4aa5a888a722509f0855a04cb4a1c5bfc7ce570.scope - libcontainer container bb0a2de2a8bcbf6d677ec249c4aa5a888a722509f0855a04cb4a1c5bfc7ce570. 
Jan 14 05:44:32.329000 audit: BPF prog-id=96 op=LOAD Jan 14 05:44:32.331000 audit: BPF prog-id=97 op=LOAD Jan 14 05:44:32.331000 audit[2626]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2502 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646633313133643933663733636235333439393635343163343361 Jan 14 05:44:32.331000 audit: BPF prog-id=97 op=UNLOAD Jan 14 05:44:32.331000 audit[2626]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2502 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646633313133643933663733636235333439393635343163343361 Jan 14 05:44:32.331000 audit: BPF prog-id=98 op=LOAD Jan 14 05:44:32.331000 audit[2626]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2502 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.331000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646633313133643933663733636235333439393635343163343361 Jan 14 05:44:32.331000 audit: BPF prog-id=99 op=LOAD Jan 14 05:44:32.331000 audit[2626]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2502 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646633313133643933663733636235333439393635343163343361 Jan 14 05:44:32.331000 audit: BPF prog-id=99 op=UNLOAD Jan 14 05:44:32.331000 audit[2626]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2502 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646633313133643933663733636235333439393635343163343361 Jan 14 05:44:32.331000 audit: BPF prog-id=98 op=UNLOAD Jan 14 05:44:32.331000 audit[2626]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2502 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
05:44:32.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646633313133643933663733636235333439393635343163343361 Jan 14 05:44:32.331000 audit: BPF prog-id=100 op=LOAD Jan 14 05:44:32.331000 audit[2626]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2502 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646633313133643933663733636235333439393635343163343361 Jan 14 05:44:32.336000 audit: BPF prog-id=101 op=LOAD Jan 14 05:44:32.337000 audit: BPF prog-id=102 op=LOAD Jan 14 05:44:32.337000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2503 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.337000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653764386238373066343162643735636233383035336662643565 Jan 14 05:44:32.338000 audit: BPF prog-id=102 op=UNLOAD Jan 14 05:44:32.338000 audit[2627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.338000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653764386238373066343162643735636233383035336662643565 Jan 14 05:44:32.338000 audit: BPF prog-id=103 op=LOAD Jan 14 05:44:32.338000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2503 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.338000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653764386238373066343162643735636233383035336662643565 Jan 14 05:44:32.339000 audit: BPF prog-id=104 op=LOAD Jan 14 05:44:32.339000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2503 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653764386238373066343162643735636233383035336662643565 Jan 14 05:44:32.340000 audit: BPF prog-id=104 op=UNLOAD Jan 14 05:44:32.340000 audit[2627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653764386238373066343162643735636233383035336662643565 Jan 14 05:44:32.340000 audit: BPF prog-id=103 op=UNLOAD Jan 14 05:44:32.340000 audit[2627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2503 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653764386238373066343162643735636233383035336662643565 Jan 14 05:44:32.340000 audit: BPF prog-id=105 op=LOAD Jan 14 05:44:32.340000 audit[2627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2503 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653764386238373066343162643735636233383035336662643565 Jan 14 05:44:32.355000 audit: BPF prog-id=106 op=LOAD Jan 14 05:44:32.356000 audit: BPF prog-id=107 op=LOAD Jan 14 05:44:32.356000 audit[2632]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2521 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262306132646532613862636266366436373765633234396334616135 Jan 14 05:44:32.356000 audit: BPF prog-id=107 op=UNLOAD Jan 14 05:44:32.356000 audit[2632]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262306132646532613862636266366436373765633234396334616135 Jan 14 05:44:32.356000 audit: BPF prog-id=108 op=LOAD Jan 14 05:44:32.356000 audit[2632]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2521 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262306132646532613862636266366436373765633234396334616135 Jan 14 05:44:32.356000 audit: BPF prog-id=109 op=LOAD Jan 14 05:44:32.356000 audit[2632]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2521 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262306132646532613862636266366436373765633234396334616135 Jan 14 05:44:32.356000 audit: BPF prog-id=109 op=UNLOAD Jan 14 05:44:32.356000 audit[2632]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262306132646532613862636266366436373765633234396334616135 Jan 14 05:44:32.356000 audit: BPF prog-id=108 op=UNLOAD Jan 14 05:44:32.356000 audit[2632]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262306132646532613862636266366436373765633234396334616135 Jan 14 05:44:32.356000 audit: BPF prog-id=110 op=LOAD Jan 
14 05:44:32.356000 audit[2632]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2521 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:32.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262306132646532613862636266366436373765633234396334616135 Jan 14 05:44:32.381354 kubelet[2426]: E0114 05:44:32.380811 2426 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 05:44:32.401816 containerd[1620]: time="2026-01-14T05:44:32.401785898Z" level=info msg="StartContainer for \"53df3113d93f73cb534996541c43a3fb83925bdfd6dbefe4a407918dbab97a84\" returns successfully" Jan 14 05:44:32.433371 containerd[1620]: time="2026-01-14T05:44:32.431798388Z" level=info msg="StartContainer for \"b0e7d8b870f41bd75cb38053fbd5ea1aac37d7ecd8939002ff23afae687f9217\" returns successfully" Jan 14 05:44:32.453878 containerd[1620]: time="2026-01-14T05:44:32.453595284Z" level=info msg="StartContainer for \"bb0a2de2a8bcbf6d677ec249c4aa5a888a722509f0855a04cb4a1c5bfc7ce570\" returns successfully" Jan 14 05:44:32.531887 kubelet[2426]: I0114 05:44:32.531811 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 05:44:32.532375 kubelet[2426]: E0114 05:44:32.532296 2426 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection 
refused" node="localhost" Jan 14 05:44:32.992952 kubelet[2426]: E0114 05:44:32.992427 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 05:44:32.992952 kubelet[2426]: E0114 05:44:32.992643 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:33.003236 kubelet[2426]: E0114 05:44:33.002867 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 05:44:33.003236 kubelet[2426]: E0114 05:44:33.003041 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:33.010380 kubelet[2426]: E0114 05:44:33.010312 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 05:44:33.010868 kubelet[2426]: E0114 05:44:33.010703 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:34.011146 kubelet[2426]: E0114 05:44:34.011019 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 05:44:34.011586 kubelet[2426]: E0114 05:44:34.011323 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:34.011903 kubelet[2426]: E0114 05:44:34.011820 2426 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"localhost\" not found" node="localhost" Jan 14 05:44:34.012040 kubelet[2426]: E0114 05:44:34.011965 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:34.142270 kubelet[2426]: I0114 05:44:34.141311 2426 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 05:44:34.555610 kubelet[2426]: E0114 05:44:34.555548 2426 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 14 05:44:34.675957 kubelet[2426]: I0114 05:44:34.675654 2426 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 05:44:34.766546 kubelet[2426]: I0114 05:44:34.766385 2426 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 05:44:34.783767 kubelet[2426]: E0114 05:44:34.783588 2426 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 14 05:44:34.783767 kubelet[2426]: I0114 05:44:34.783656 2426 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 05:44:34.786071 kubelet[2426]: E0114 05:44:34.786038 2426 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 14 05:44:34.786404 kubelet[2426]: I0114 05:44:34.786141 2426 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 05:44:34.789214 kubelet[2426]: E0114 05:44:34.788993 2426 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 14 05:44:34.822642 kubelet[2426]: I0114 05:44:34.821978 2426 apiserver.go:52] "Watching apiserver" Jan 14 05:44:34.889970 kubelet[2426]: I0114 05:44:34.889876 2426 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 14 05:44:35.010829 kubelet[2426]: I0114 05:44:35.010686 2426 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 05:44:35.014310 kubelet[2426]: E0114 05:44:35.014054 2426 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 14 05:44:35.014698 kubelet[2426]: E0114 05:44:35.014450 2426 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:36.991903 systemd[1]: Reload requested from client PID 2734 ('systemctl') (unit session-8.scope)... Jan 14 05:44:36.991983 systemd[1]: Reloading... Jan 14 05:44:37.140124 zram_generator::config[2783]: No configuration found. Jan 14 05:44:37.420961 systemd[1]: Reloading finished in 428 ms. Jan 14 05:44:37.468040 kubelet[2426]: I0114 05:44:37.467957 2426 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 05:44:37.468444 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 05:44:37.484079 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 05:44:37.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:44:37.484675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 05:44:37.484869 systemd[1]: kubelet.service: Consumed 1.527s CPU time, 123.6M memory peak. Jan 14 05:44:37.489338 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 14 05:44:37.490267 kernel: audit: type=1131 audit(1768369477.483:386): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:44:37.490541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 05:44:37.490000 audit: BPF prog-id=111 op=LOAD Jan 14 05:44:37.508605 kernel: audit: type=1334 audit(1768369477.490:387): prog-id=111 op=LOAD Jan 14 05:44:37.508660 kernel: audit: type=1334 audit(1768369477.490:388): prog-id=76 op=UNLOAD Jan 14 05:44:37.490000 audit: BPF prog-id=76 op=UNLOAD Jan 14 05:44:37.513424 kernel: audit: type=1334 audit(1768369477.490:389): prog-id=112 op=LOAD Jan 14 05:44:37.490000 audit: BPF prog-id=112 op=LOAD Jan 14 05:44:37.518289 kernel: audit: type=1334 audit(1768369477.490:390): prog-id=113 op=LOAD Jan 14 05:44:37.490000 audit: BPF prog-id=113 op=LOAD Jan 14 05:44:37.523283 kernel: audit: type=1334 audit(1768369477.490:391): prog-id=77 op=UNLOAD Jan 14 05:44:37.490000 audit: BPF prog-id=77 op=UNLOAD Jan 14 05:44:37.528305 kernel: audit: type=1334 audit(1768369477.490:392): prog-id=78 op=UNLOAD Jan 14 05:44:37.490000 audit: BPF prog-id=78 op=UNLOAD Jan 14 05:44:37.532967 kernel: audit: type=1334 audit(1768369477.491:393): prog-id=114 op=LOAD Jan 14 05:44:37.491000 audit: BPF prog-id=114 op=LOAD Jan 14 05:44:37.537673 kernel: audit: type=1334 audit(1768369477.491:394): prog-id=66 op=UNLOAD Jan 14 05:44:37.491000 audit: BPF prog-id=66 op=UNLOAD Jan 14 05:44:37.542692 kernel: audit: type=1334 audit(1768369477.491:395): prog-id=115 op=LOAD Jan 14 05:44:37.491000 audit: BPF prog-id=115 op=LOAD Jan 14 
05:44:37.491000 audit: BPF prog-id=116 op=LOAD Jan 14 05:44:37.491000 audit: BPF prog-id=67 op=UNLOAD Jan 14 05:44:37.491000 audit: BPF prog-id=68 op=UNLOAD Jan 14 05:44:37.492000 audit: BPF prog-id=117 op=LOAD Jan 14 05:44:37.492000 audit: BPF prog-id=118 op=LOAD Jan 14 05:44:37.492000 audit: BPF prog-id=79 op=UNLOAD Jan 14 05:44:37.492000 audit: BPF prog-id=80 op=UNLOAD Jan 14 05:44:37.494000 audit: BPF prog-id=119 op=LOAD Jan 14 05:44:37.494000 audit: BPF prog-id=65 op=UNLOAD Jan 14 05:44:37.496000 audit: BPF prog-id=120 op=LOAD Jan 14 05:44:37.496000 audit: BPF prog-id=75 op=UNLOAD Jan 14 05:44:37.498000 audit: BPF prog-id=121 op=LOAD Jan 14 05:44:37.498000 audit: BPF prog-id=61 op=UNLOAD Jan 14 05:44:37.498000 audit: BPF prog-id=122 op=LOAD Jan 14 05:44:37.498000 audit: BPF prog-id=123 op=LOAD Jan 14 05:44:37.498000 audit: BPF prog-id=62 op=UNLOAD Jan 14 05:44:37.498000 audit: BPF prog-id=63 op=UNLOAD Jan 14 05:44:37.499000 audit: BPF prog-id=124 op=LOAD Jan 14 05:44:37.499000 audit: BPF prog-id=72 op=UNLOAD Jan 14 05:44:37.499000 audit: BPF prog-id=125 op=LOAD Jan 14 05:44:37.499000 audit: BPF prog-id=126 op=LOAD Jan 14 05:44:37.499000 audit: BPF prog-id=73 op=UNLOAD Jan 14 05:44:37.499000 audit: BPF prog-id=74 op=UNLOAD Jan 14 05:44:37.500000 audit: BPF prog-id=127 op=LOAD Jan 14 05:44:37.500000 audit: BPF prog-id=64 op=UNLOAD Jan 14 05:44:37.502000 audit: BPF prog-id=128 op=LOAD Jan 14 05:44:37.502000 audit: BPF prog-id=69 op=UNLOAD Jan 14 05:44:37.502000 audit: BPF prog-id=129 op=LOAD Jan 14 05:44:37.502000 audit: BPF prog-id=130 op=LOAD Jan 14 05:44:37.502000 audit: BPF prog-id=70 op=UNLOAD Jan 14 05:44:37.502000 audit: BPF prog-id=71 op=UNLOAD Jan 14 05:44:37.791131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 05:44:37.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:44:37.806542 (kubelet)[2825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 05:44:37.923556 kubelet[2825]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 05:44:37.923556 kubelet[2825]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 05:44:37.923556 kubelet[2825]: I0114 05:44:37.923497 2825 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 05:44:37.935899 kubelet[2825]: I0114 05:44:37.935676 2825 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 14 05:44:37.935899 kubelet[2825]: I0114 05:44:37.935803 2825 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 05:44:37.935899 kubelet[2825]: I0114 05:44:37.935834 2825 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 14 05:44:37.935899 kubelet[2825]: I0114 05:44:37.935840 2825 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 05:44:37.936075 kubelet[2825]: I0114 05:44:37.935981 2825 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 05:44:37.940281 kubelet[2825]: I0114 05:44:37.939879 2825 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 05:44:37.943049 kubelet[2825]: I0114 05:44:37.942622 2825 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 05:44:37.952455 kubelet[2825]: I0114 05:44:37.952424 2825 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 05:44:37.964333 kubelet[2825]: I0114 05:44:37.963774 2825 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 14 05:44:37.964486 kubelet[2825]: I0114 05:44:37.964393 2825 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 05:44:37.964661 kubelet[2825]: I0114 05:44:37.964476 2825 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 05:44:37.964661 kubelet[2825]: I0114 05:44:37.964655 2825 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 05:44:37.964908 kubelet[2825]: I0114 05:44:37.964665 2825 container_manager_linux.go:306] "Creating device plugin manager" Jan 14 05:44:37.964908 kubelet[2825]: I0114 05:44:37.964692 2825 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 14 05:44:37.965844 kubelet[2825]: I0114 05:44:37.965522 2825 state_mem.go:36] 
"Initialized new in-memory state store" Jan 14 05:44:37.966421 kubelet[2825]: I0114 05:44:37.966296 2825 kubelet.go:475] "Attempting to sync node with API server" Jan 14 05:44:37.966421 kubelet[2825]: I0114 05:44:37.966359 2825 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 05:44:37.966421 kubelet[2825]: I0114 05:44:37.966383 2825 kubelet.go:387] "Adding apiserver pod source" Jan 14 05:44:37.966421 kubelet[2825]: I0114 05:44:37.966402 2825 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 05:44:37.973377 kubelet[2825]: I0114 05:44:37.973356 2825 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 05:44:37.977867 kubelet[2825]: I0114 05:44:37.973991 2825 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 05:44:37.977867 kubelet[2825]: I0114 05:44:37.977307 2825 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 14 05:44:37.984001 kubelet[2825]: I0114 05:44:37.983930 2825 server.go:1262] "Started kubelet" Jan 14 05:44:37.985589 kubelet[2825]: I0114 05:44:37.984801 2825 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 05:44:37.985589 kubelet[2825]: I0114 05:44:37.984851 2825 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 14 05:44:37.985589 kubelet[2825]: I0114 05:44:37.985039 2825 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 05:44:37.985589 kubelet[2825]: I0114 05:44:37.985079 2825 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 05:44:37.985983 kubelet[2825]: I0114 05:44:37.985851 2825 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Jan 14 05:44:37.987682 kubelet[2825]: I0114 05:44:37.987668 2825 server.go:310] "Adding debug handlers to kubelet server" Jan 14 05:44:37.996986 kubelet[2825]: I0114 05:44:37.996909 2825 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 14 05:44:37.997150 kubelet[2825]: I0114 05:44:37.997070 2825 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 14 05:44:38.001333 kubelet[2825]: I0114 05:44:38.001144 2825 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 05:44:38.001465 kubelet[2825]: I0114 05:44:38.001431 2825 reconciler.go:29] "Reconciler: start to sync state" Jan 14 05:44:38.011922 kubelet[2825]: E0114 05:44:38.011891 2825 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 05:44:38.017568 kubelet[2825]: I0114 05:44:38.015040 2825 factory.go:223] Registration of the systemd container factory successfully Jan 14 05:44:38.017815 kubelet[2825]: I0114 05:44:38.017789 2825 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 05:44:38.028846 kubelet[2825]: I0114 05:44:38.028704 2825 factory.go:223] Registration of the containerd container factory successfully Jan 14 05:44:38.091798 kubelet[2825]: I0114 05:44:38.091409 2825 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 14 05:44:38.098623 kubelet[2825]: I0114 05:44:38.098601 2825 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 14 05:44:38.099480 kubelet[2825]: I0114 05:44:38.099277 2825 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 14 05:44:38.099480 kubelet[2825]: I0114 05:44:38.099304 2825 kubelet.go:2427] "Starting kubelet main sync loop" Jan 14 05:44:38.099693 kubelet[2825]: E0114 05:44:38.099674 2825 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 05:44:38.148705 kubelet[2825]: I0114 05:44:38.148569 2825 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 05:44:38.148705 kubelet[2825]: I0114 05:44:38.148639 2825 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 05:44:38.148705 kubelet[2825]: I0114 05:44:38.148657 2825 state_mem.go:36] "Initialized new in-memory state store" Jan 14 05:44:38.148967 kubelet[2825]: I0114 05:44:38.148839 2825 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 05:44:38.148967 kubelet[2825]: I0114 05:44:38.148849 2825 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 05:44:38.148967 kubelet[2825]: I0114 05:44:38.148864 2825 policy_none.go:49] "None policy: Start" Jan 14 05:44:38.148967 kubelet[2825]: I0114 05:44:38.148931 2825 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 14 05:44:38.148967 kubelet[2825]: I0114 05:44:38.148942 2825 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 14 05:44:38.149122 kubelet[2825]: I0114 05:44:38.149022 2825 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 14 05:44:38.149122 kubelet[2825]: I0114 05:44:38.149029 2825 policy_none.go:47] "Start" Jan 14 05:44:38.161373 kubelet[2825]: E0114 05:44:38.161353 2825 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 05:44:38.165319 kubelet[2825]: I0114 05:44:38.164691 
2825 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 05:44:38.165319 kubelet[2825]: I0114 05:44:38.164710 2825 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 05:44:38.165830 kubelet[2825]: I0114 05:44:38.165814 2825 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 05:44:38.174997 kubelet[2825]: E0114 05:44:38.174975 2825 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 05:44:38.200956 kubelet[2825]: I0114 05:44:38.200660 2825 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 05:44:38.200956 kubelet[2825]: I0114 05:44:38.200922 2825 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 05:44:38.203974 kubelet[2825]: I0114 05:44:38.203955 2825 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 05:44:38.292427 kubelet[2825]: I0114 05:44:38.292062 2825 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 05:44:38.304974 kubelet[2825]: I0114 05:44:38.304580 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 14 05:44:38.304974 kubelet[2825]: I0114 05:44:38.304620 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78a5a81fd347ad4599548dcd6f1dfbbc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"78a5a81fd347ad4599548dcd6f1dfbbc\") " pod="kube-system/kube-apiserver-localhost" Jan 14 
05:44:38.304974 kubelet[2825]: I0114 05:44:38.304651 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78a5a81fd347ad4599548dcd6f1dfbbc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"78a5a81fd347ad4599548dcd6f1dfbbc\") " pod="kube-system/kube-apiserver-localhost" Jan 14 05:44:38.304974 kubelet[2825]: I0114 05:44:38.304703 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 05:44:38.304974 kubelet[2825]: I0114 05:44:38.304849 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78a5a81fd347ad4599548dcd6f1dfbbc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"78a5a81fd347ad4599548dcd6f1dfbbc\") " pod="kube-system/kube-apiserver-localhost" Jan 14 05:44:38.305436 kubelet[2825]: I0114 05:44:38.304873 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 05:44:38.305436 kubelet[2825]: I0114 05:44:38.305063 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 
05:44:38.305436 kubelet[2825]: I0114 05:44:38.305295 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 05:44:38.305436 kubelet[2825]: I0114 05:44:38.305312 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 05:44:38.316670 kubelet[2825]: I0114 05:44:38.316589 2825 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 14 05:44:38.316670 kubelet[2825]: I0114 05:44:38.316661 2825 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 05:44:38.513875 kubelet[2825]: E0114 05:44:38.513521 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:38.518411 kubelet[2825]: E0114 05:44:38.518392 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:38.519305 kubelet[2825]: E0114 05:44:38.518584 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:38.968295 kubelet[2825]: I0114 05:44:38.968048 2825 apiserver.go:52] "Watching apiserver" Jan 14 05:44:38.998151 kubelet[2825]: I0114 05:44:38.997894 2825 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 14 05:44:39.144580 kubelet[2825]: I0114 05:44:39.144231 2825 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 05:44:39.144580 kubelet[2825]: I0114 05:44:39.144357 2825 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 05:44:39.146520 kubelet[2825]: E0114 05:44:39.145958 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:39.164123 kubelet[2825]: E0114 05:44:39.163886 2825 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 14 05:44:39.164123 kubelet[2825]: E0114 05:44:39.164055 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:39.164637 kubelet[2825]: E0114 05:44:39.164390 2825 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 14 05:44:39.164637 kubelet[2825]: E0114 05:44:39.164471 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:39.181317 kubelet[2825]: I0114 05:44:39.181068 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.179872317 podStartE2EDuration="1.179872317s" podCreationTimestamp="2026-01-14 05:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 
05:44:39.179638013 +0000 UTC m=+1.364117533" watchObservedRunningTime="2026-01-14 05:44:39.179872317 +0000 UTC m=+1.364351839" Jan 14 05:44:39.196933 kubelet[2825]: I0114 05:44:39.196854 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.196844163 podStartE2EDuration="1.196844163s" podCreationTimestamp="2026-01-14 05:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 05:44:39.195403581 +0000 UTC m=+1.379883102" watchObservedRunningTime="2026-01-14 05:44:39.196844163 +0000 UTC m=+1.381323675" Jan 14 05:44:40.146423 kubelet[2825]: E0114 05:44:40.146366 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:40.147119 kubelet[2825]: E0114 05:44:40.146643 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:40.147455 kubelet[2825]: E0114 05:44:40.147108 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:41.150130 kubelet[2825]: E0114 05:44:41.149927 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:42.119791 kubelet[2825]: E0114 05:44:42.119642 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:42.158852 kubelet[2825]: I0114 05:44:42.158600 2825 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.1585869 podStartE2EDuration="4.1585869s" podCreationTimestamp="2026-01-14 05:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 05:44:39.208461918 +0000 UTC m=+1.392941439" watchObservedRunningTime="2026-01-14 05:44:42.1585869 +0000 UTC m=+4.343066421" Jan 14 05:44:42.165118 kubelet[2825]: E0114 05:44:42.165074 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:42.185310 kubelet[2825]: I0114 05:44:42.185055 2825 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 05:44:42.186551 containerd[1620]: time="2026-01-14T05:44:42.186407365Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 05:44:42.187324 kubelet[2825]: I0114 05:44:42.187295 2825 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 05:44:43.148986 systemd[1]: Created slice kubepods-besteffort-pod22b1513f_14d9_4701_947d_5ac3acbbd786.slice - libcontainer container kubepods-besteffort-pod22b1513f_14d9_4701_947d_5ac3acbbd786.slice. 
Jan 14 05:44:43.170345 kubelet[2825]: E0114 05:44:43.170308 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:43.242037 kubelet[2825]: I0114 05:44:43.241915 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/22b1513f-14d9-4701-947d-5ac3acbbd786-kube-proxy\") pod \"kube-proxy-4phnr\" (UID: \"22b1513f-14d9-4701-947d-5ac3acbbd786\") " pod="kube-system/kube-proxy-4phnr" Jan 14 05:44:43.242287 kubelet[2825]: I0114 05:44:43.242096 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22b1513f-14d9-4701-947d-5ac3acbbd786-xtables-lock\") pod \"kube-proxy-4phnr\" (UID: \"22b1513f-14d9-4701-947d-5ac3acbbd786\") " pod="kube-system/kube-proxy-4phnr" Jan 14 05:44:43.245457 kubelet[2825]: I0114 05:44:43.242128 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22b1513f-14d9-4701-947d-5ac3acbbd786-lib-modules\") pod \"kube-proxy-4phnr\" (UID: \"22b1513f-14d9-4701-947d-5ac3acbbd786\") " pod="kube-system/kube-proxy-4phnr" Jan 14 05:44:43.248389 kubelet[2825]: I0114 05:44:43.248129 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pkpq\" (UniqueName: \"kubernetes.io/projected/22b1513f-14d9-4701-947d-5ac3acbbd786-kube-api-access-2pkpq\") pod \"kube-proxy-4phnr\" (UID: \"22b1513f-14d9-4701-947d-5ac3acbbd786\") " pod="kube-system/kube-proxy-4phnr" Jan 14 05:44:43.436398 systemd[1]: Created slice kubepods-besteffort-pod1e3b1940_557f_411d_aac7_a1ad8a795677.slice - libcontainer container kubepods-besteffort-pod1e3b1940_557f_411d_aac7_a1ad8a795677.slice. 
Jan 14 05:44:43.449709 kubelet[2825]: I0114 05:44:43.449552 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e3b1940-557f-411d-aac7-a1ad8a795677-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-gcc5s\" (UID: \"1e3b1940-557f-411d-aac7-a1ad8a795677\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-gcc5s" Jan 14 05:44:43.449709 kubelet[2825]: I0114 05:44:43.449612 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qv4s\" (UniqueName: \"kubernetes.io/projected/1e3b1940-557f-411d-aac7-a1ad8a795677-kube-api-access-5qv4s\") pod \"tigera-operator-65cdcdfd6d-gcc5s\" (UID: \"1e3b1940-557f-411d-aac7-a1ad8a795677\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-gcc5s" Jan 14 05:44:43.464816 kubelet[2825]: E0114 05:44:43.464495 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:43.468358 containerd[1620]: time="2026-01-14T05:44:43.467095704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4phnr,Uid:22b1513f-14d9-4701-947d-5ac3acbbd786,Namespace:kube-system,Attempt:0,}" Jan 14 05:44:43.570109 containerd[1620]: time="2026-01-14T05:44:43.569957166Z" level=info msg="connecting to shim 7fa1bbaf782640729a2cbb8955724800012a8f96290c46854227b2a34dae052d" address="unix:///run/containerd/s/fe8d670128f74af3957d4f6d78baa3d94218fdd24e07f65e87faf675424f163a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:44:43.685876 systemd[1]: Started cri-containerd-7fa1bbaf782640729a2cbb8955724800012a8f96290c46854227b2a34dae052d.scope - libcontainer container 7fa1bbaf782640729a2cbb8955724800012a8f96290c46854227b2a34dae052d. 
Jan 14 05:44:43.735000 audit: BPF prog-id=131 op=LOAD Jan 14 05:44:43.742728 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 05:44:43.742836 kernel: audit: type=1334 audit(1768369483.735:428): prog-id=131 op=LOAD Jan 14 05:44:43.747081 containerd[1620]: time="2026-01-14T05:44:43.746984931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-gcc5s,Uid:1e3b1940-557f-411d-aac7-a1ad8a795677,Namespace:tigera-operator,Attempt:0,}" Jan 14 05:44:43.742000 audit: BPF prog-id=132 op=LOAD Jan 14 05:44:43.754890 kernel: audit: type=1334 audit(1768369483.742:429): prog-id=132 op=LOAD Jan 14 05:44:43.742000 audit[2904]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2892 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.781975 kernel: audit: type=1300 audit(1768369483.742:429): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2892 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.783676 kernel: audit: type=1327 audit(1768369483.742:429): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766613162626166373832363430373239613263626238393535373234 Jan 14 05:44:43.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766613162626166373832363430373239613263626238393535373234 Jan 14 05:44:43.792441 containerd[1620]: 
time="2026-01-14T05:44:43.792405095Z" level=info msg="connecting to shim e5561c3ca11b3cf9b59366226914f74ceb79285bef13e2b363e3b533d7c9fe79" address="unix:///run/containerd/s/20caea5766dba3ab36181a6d04d3b25de1499705f9e49aa82b61646cb517715f" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:44:43.801397 kernel: audit: type=1334 audit(1768369483.742:430): prog-id=132 op=UNLOAD Jan 14 05:44:43.742000 audit: BPF prog-id=132 op=UNLOAD Jan 14 05:44:43.742000 audit[2904]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2892 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.826968 kernel: audit: type=1300 audit(1768369483.742:430): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2892 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.827567 kernel: audit: type=1327 audit(1768369483.742:430): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766613162626166373832363430373239613263626238393535373234 Jan 14 05:44:43.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766613162626166373832363430373239613263626238393535373234 Jan 14 05:44:43.746000 audit: BPF prog-id=133 op=LOAD Jan 14 05:44:43.853261 kernel: audit: type=1334 audit(1768369483.746:431): prog-id=133 op=LOAD Jan 14 05:44:43.746000 audit[2904]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 
a3=0 items=0 ppid=2892 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.874909 kernel: audit: type=1300 audit(1768369483.746:431): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2892 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.875011 kernel: audit: type=1327 audit(1768369483.746:431): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766613162626166373832363430373239613263626238393535373234 Jan 14 05:44:43.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766613162626166373832363430373239613263626238393535373234 Jan 14 05:44:43.875997 systemd[1]: Started cri-containerd-e5561c3ca11b3cf9b59366226914f74ceb79285bef13e2b363e3b533d7c9fe79.scope - libcontainer container e5561c3ca11b3cf9b59366226914f74ceb79285bef13e2b363e3b533d7c9fe79. 
Jan 14 05:44:43.888716 containerd[1620]: time="2026-01-14T05:44:43.888610735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4phnr,Uid:22b1513f-14d9-4701-947d-5ac3acbbd786,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fa1bbaf782640729a2cbb8955724800012a8f96290c46854227b2a34dae052d\"" Jan 14 05:44:43.890649 kubelet[2825]: E0114 05:44:43.890419 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:43.746000 audit: BPF prog-id=134 op=LOAD Jan 14 05:44:43.746000 audit[2904]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2892 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766613162626166373832363430373239613263626238393535373234 Jan 14 05:44:43.746000 audit: BPF prog-id=134 op=UNLOAD Jan 14 05:44:43.746000 audit[2904]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2892 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766613162626166373832363430373239613263626238393535373234 Jan 14 05:44:43.746000 audit: BPF prog-id=133 op=UNLOAD Jan 14 05:44:43.746000 
audit[2904]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2892 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766613162626166373832363430373239613263626238393535373234 Jan 14 05:44:43.746000 audit: BPF prog-id=135 op=LOAD Jan 14 05:44:43.746000 audit[2904]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2892 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766613162626166373832363430373239613263626238393535373234 Jan 14 05:44:43.910000 audit: BPF prog-id=136 op=LOAD Jan 14 05:44:43.912000 audit: BPF prog-id=137 op=LOAD Jan 14 05:44:43.912000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000174238 a2=98 a3=0 items=0 ppid=2930 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.912000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535353631633363613131623363663962353933363632323639313466 Jan 14 05:44:43.912000 audit: BPF prog-id=137 op=UNLOAD Jan 14 05:44:43.912000 audit[2943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2930 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535353631633363613131623363663962353933363632323639313466 Jan 14 05:44:43.912000 audit: BPF prog-id=138 op=LOAD Jan 14 05:44:43.912000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000174488 a2=98 a3=0 items=0 ppid=2930 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535353631633363613131623363663962353933363632323639313466 Jan 14 05:44:43.912000 audit: BPF prog-id=139 op=LOAD Jan 14 05:44:43.912000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000174218 a2=98 a3=0 items=0 ppid=2930 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 05:44:43.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535353631633363613131623363663962353933363632323639313466 Jan 14 05:44:43.913000 audit: BPF prog-id=139 op=UNLOAD Jan 14 05:44:43.913000 audit[2943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2930 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535353631633363613131623363663962353933363632323639313466 Jan 14 05:44:43.913000 audit: BPF prog-id=138 op=UNLOAD Jan 14 05:44:43.913000 audit[2943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2930 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535353631633363613131623363663962353933363632323639313466 Jan 14 05:44:43.913000 audit: BPF prog-id=140 op=LOAD Jan 14 05:44:43.913000 audit[2943]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001746e8 a2=98 a3=0 items=0 ppid=2930 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:43.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535353631633363613131623363663962353933363632323639313466 Jan 14 05:44:43.923079 containerd[1620]: time="2026-01-14T05:44:43.923042038Z" level=info msg="CreateContainer within sandbox \"7fa1bbaf782640729a2cbb8955724800012a8f96290c46854227b2a34dae052d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 05:44:43.940291 containerd[1620]: time="2026-01-14T05:44:43.939473783Z" level=info msg="Container a35014294c15adc17512f72cd0086ed30caac24cb7f4a08b480115db57243534: CDI devices from CRI Config.CDIDevices: []" Jan 14 05:44:43.956578 containerd[1620]: time="2026-01-14T05:44:43.956471697Z" level=info msg="CreateContainer within sandbox \"7fa1bbaf782640729a2cbb8955724800012a8f96290c46854227b2a34dae052d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a35014294c15adc17512f72cd0086ed30caac24cb7f4a08b480115db57243534\"" Jan 14 05:44:43.958998 containerd[1620]: time="2026-01-14T05:44:43.958964517Z" level=info msg="StartContainer for \"a35014294c15adc17512f72cd0086ed30caac24cb7f4a08b480115db57243534\"" Jan 14 05:44:43.964825 containerd[1620]: time="2026-01-14T05:44:43.964737353Z" level=info msg="connecting to shim a35014294c15adc17512f72cd0086ed30caac24cb7f4a08b480115db57243534" address="unix:///run/containerd/s/fe8d670128f74af3957d4f6d78baa3d94218fdd24e07f65e87faf675424f163a" protocol=ttrpc version=3 Jan 14 05:44:43.985482 containerd[1620]: time="2026-01-14T05:44:43.985357558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-gcc5s,Uid:1e3b1940-557f-411d-aac7-a1ad8a795677,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e5561c3ca11b3cf9b59366226914f74ceb79285bef13e2b363e3b533d7c9fe79\"" Jan 14 
05:44:43.996070 containerd[1620]: time="2026-01-14T05:44:43.995953055Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 05:44:44.014572 systemd[1]: Started cri-containerd-a35014294c15adc17512f72cd0086ed30caac24cb7f4a08b480115db57243534.scope - libcontainer container a35014294c15adc17512f72cd0086ed30caac24cb7f4a08b480115db57243534. Jan 14 05:44:44.115000 audit: BPF prog-id=141 op=LOAD Jan 14 05:44:44.115000 audit[2973]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2892 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353031343239346331356164633137353132663732636430303836 Jan 14 05:44:44.115000 audit: BPF prog-id=142 op=LOAD Jan 14 05:44:44.115000 audit[2973]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2892 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353031343239346331356164633137353132663732636430303836 Jan 14 05:44:44.115000 audit: BPF prog-id=142 op=UNLOAD Jan 14 05:44:44.115000 audit[2973]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2892 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353031343239346331356164633137353132663732636430303836 Jan 14 05:44:44.115000 audit: BPF prog-id=141 op=UNLOAD Jan 14 05:44:44.115000 audit[2973]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2892 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353031343239346331356164633137353132663732636430303836 Jan 14 05:44:44.116000 audit: BPF prog-id=143 op=LOAD Jan 14 05:44:44.116000 audit[2973]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2892 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353031343239346331356164633137353132663732636430303836 Jan 14 05:44:44.181269 containerd[1620]: time="2026-01-14T05:44:44.181136181Z" level=info msg="StartContainer for \"a35014294c15adc17512f72cd0086ed30caac24cb7f4a08b480115db57243534\" returns successfully" Jan 14 05:44:44.580000 audit[3041]: 
NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:44.580000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd679af9e0 a2=0 a3=7ffd679af9cc items=0 ppid=2986 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 05:44:44.580000 audit[3040]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.580000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8b611ed0 a2=0 a3=7ffd8b611ebc items=0 ppid=2986 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 05:44:44.585000 audit[3042]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.585000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0e667ab0 a2=0 a3=7fff0e667a9c items=0 ppid=2986 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 05:44:44.589000 audit[3043]: NETFILTER_CFG table=nat:57 
family=10 entries=1 op=nft_register_chain pid=3043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:44.589000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd76fa6360 a2=0 a3=7ffd76fa634c items=0 ppid=2986 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 05:44:44.590000 audit[3044]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.590000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd6287140 a2=0 a3=6b9a98ba03de70e7 items=0 ppid=2986 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.590000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 05:44:44.601000 audit[3048]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3048 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:44.601000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2cbec680 a2=0 a3=7ffc2cbec66c items=0 ppid=2986 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.601000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 05:44:44.702000 audit[3049]: NETFILTER_CFG table=filter:60 family=2 
entries=1 op=nft_register_chain pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.702000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe32ad3ff0 a2=0 a3=7ffe32ad3fdc items=0 ppid=2986 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 05:44:44.712000 audit[3051]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.712000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd683aebc0 a2=0 a3=7ffd683aebac items=0 ppid=2986 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.712000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 14 05:44:44.725000 audit[3054]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.725000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd5fe235e0 a2=0 a3=7ffd5fe235cc items=0 ppid=2986 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.725000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 14 05:44:44.730000 audit[3055]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.730000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe78348620 a2=0 a3=7ffe7834860c items=0 ppid=2986 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.730000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 05:44:44.739000 audit[3057]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.739000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc004ce6f0 a2=0 a3=7ffc004ce6dc items=0 ppid=2986 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.739000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 05:44:44.744000 audit[3058]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.744000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4b80a9c0 a2=0 
a3=7ffc4b80a9ac items=0 ppid=2986 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.744000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 05:44:44.754000 audit[3060]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.754000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe46b6c6e0 a2=0 a3=7ffe46b6c6cc items=0 ppid=2986 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.754000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 05:44:44.769000 audit[3063]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.769000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffddfd8d0d0 a2=0 a3=7ffddfd8d0bc items=0 ppid=2986 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.769000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 05:44:44.773000 audit[3064]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.773000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9ab8c000 a2=0 a3=7ffc9ab8bfec items=0 ppid=2986 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.773000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 05:44:44.783000 audit[3066]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.783000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffee4ca2560 a2=0 a3=7ffee4ca254c items=0 ppid=2986 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.783000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 05:44:44.787000 audit[3067]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.787000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff47b7fb40 a2=0 a3=7fff47b7fb2c items=0 ppid=2986 
pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 05:44:44.795000 audit[3069]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.795000 audit[3069]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff57e34090 a2=0 a3=7fff57e3407c items=0 ppid=2986 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 14 05:44:44.807000 audit[3072]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.807000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeaaacd560 a2=0 a3=7ffeaaacd54c items=0 ppid=2986 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.807000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 
Jan 14 05:44:44.818000 audit[3075]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.818000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf5df04a0 a2=0 a3=7ffcf5df048c items=0 ppid=2986 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 14 05:44:44.822000 audit[3076]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.822000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd1f8ab930 a2=0 a3=7ffd1f8ab91c items=0 ppid=2986 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 05:44:44.830000 audit[3078]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.830000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd1c359960 a2=0 a3=7ffd1c35994c items=0 ppid=2986 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
05:44:44.830000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 05:44:44.842000 audit[3081]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.842000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd11af0580 a2=0 a3=7ffd11af056c items=0 ppid=2986 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.842000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 05:44:44.846000 audit[3082]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.846000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdac31f5f0 a2=0 a3=7ffdac31f5dc items=0 ppid=2986 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.846000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 05:44:44.855000 audit[3084]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3084 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 05:44:44.855000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc2aee0e70 a2=0 a3=7ffc2aee0e5c items=0 ppid=2986 pid=3084 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.855000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 05:44:44.905000 audit[3090]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3090 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:44.905000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd64c57610 a2=0 a3=7ffd64c575fc items=0 ppid=2986 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.905000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:44.929000 audit[3090]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:44.929000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd64c57610 a2=0 a3=7ffd64c575fc items=0 ppid=2986 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.929000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:44.934000 audit[3095]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:44.934000 audit[3095]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=108 a0=3 a1=7ffcc4c2b9a0 a2=0 a3=7ffcc4c2b98c items=0 ppid=2986 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.934000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 05:44:44.945000 audit[3097]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:44.945000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd581fcaf0 a2=0 a3=7ffd581fcadc items=0 ppid=2986 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.945000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 14 05:44:44.957000 audit[3100]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:44.957000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcdb358590 a2=0 a3=7ffcdb35857c items=0 ppid=2986 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.957000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 14 05:44:44.961000 audit[3101]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:44.961000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2694f8f0 a2=0 a3=7fff2694f8dc items=0 ppid=2986 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 05:44:44.969000 audit[3103]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:44.969000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc059ae8b0 a2=0 a3=7ffc059ae89c items=0 ppid=2986 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.969000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 05:44:44.974000 audit[3104]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:44.974000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd589a39a0 
a2=0 a3=7ffd589a398c items=0 ppid=2986 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.974000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 05:44:44.984000 audit[3106]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:44.984000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd0bac4700 a2=0 a3=7ffd0bac46ec items=0 ppid=2986 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:44.984000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 05:44:45.000000 audit[3109]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.000000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff781fd600 a2=0 a3=7fff781fd5ec items=0 ppid=2986 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.000000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 05:44:45.004000 audit[3110]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.004000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc91d27760 a2=0 a3=7ffc91d2774c items=0 ppid=2986 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.004000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 05:44:45.013000 audit[3112]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.013000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe23f05a30 a2=0 a3=7ffe23f05a1c items=0 ppid=2986 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.013000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 05:44:45.016000 audit[3113]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.016000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcfbff4cf0 a2=0 a3=7ffcfbff4cdc items=0 
ppid=2986 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.016000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 05:44:45.024000 audit[3115]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.024000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffce525ae70 a2=0 a3=7ffce525ae5c items=0 ppid=2986 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.024000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 14 05:44:45.037000 audit[3118]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.037000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff32ed1f10 a2=0 a3=7fff32ed1efc items=0 ppid=2986 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.037000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 14 05:44:45.052000 audit[3121]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.052000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdbe5ea480 a2=0 a3=7ffdbe5ea46c items=0 ppid=2986 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.052000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 14 05:44:45.058000 audit[3122]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.058000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdc3f28b70 a2=0 a3=7ffdc3f28b5c items=0 ppid=2986 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 05:44:45.066000 audit[3124]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.066000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 
a0=3 a1=7fff3f7d1610 a2=0 a3=7fff3f7d15fc items=0 ppid=2986 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 05:44:45.078000 audit[3127]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.078000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffefb461f50 a2=0 a3=7ffefb461f3c items=0 ppid=2986 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 05:44:45.083000 audit[3128]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.083000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc04300780 a2=0 a3=7ffc0430076c items=0 ppid=2986 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 05:44:45.093000 audit[3130]: NETFILTER_CFG table=nat:99 
family=10 entries=2 op=nft_register_chain pid=3130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.093000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffab66e200 a2=0 a3=7fffab66e1ec items=0 ppid=2986 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.093000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 05:44:45.099000 audit[3131]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3131 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.099000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc212e50c0 a2=0 a3=7ffc212e50ac items=0 ppid=2986 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.099000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 05:44:45.108000 audit[3133]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.108000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffabdb2d40 a2=0 a3=7fffabdb2d2c items=0 ppid=2986 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.108000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 05:44:45.123000 audit[3136]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3136 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 05:44:45.123000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffda0f2b9e0 a2=0 a3=7ffda0f2b9cc items=0 ppid=2986 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.123000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 05:44:45.135000 audit[3138]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 05:44:45.135000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc28d65510 a2=0 a3=7ffc28d654fc items=0 ppid=2986 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:45.135000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:45.136000 audit[3138]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 05:44:45.136000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc28d65510 a2=0 a3=7ffc28d654fc items=0 ppid=2986 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 05:44:45.136000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:45.188362 kubelet[2825]: E0114 05:44:45.188070 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:45.207588 kubelet[2825]: I0114 05:44:45.207332 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4phnr" podStartSLOduration=2.207313663 podStartE2EDuration="2.207313663s" podCreationTimestamp="2026-01-14 05:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 05:44:45.20692088 +0000 UTC m=+7.391400402" watchObservedRunningTime="2026-01-14 05:44:45.207313663 +0000 UTC m=+7.391793204" Jan 14 05:44:45.517513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424556446.mount: Deactivated successfully. 
Jan 14 05:44:46.637920 containerd[1620]: time="2026-01-14T05:44:46.637342833Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:46.639501 containerd[1620]: time="2026-01-14T05:44:46.639281284Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Jan 14 05:44:46.641877 containerd[1620]: time="2026-01-14T05:44:46.641711904Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:46.648661 containerd[1620]: time="2026-01-14T05:44:46.648541557Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:44:46.650945 containerd[1620]: time="2026-01-14T05:44:46.650353488Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.65429666s" Jan 14 05:44:46.650945 containerd[1620]: time="2026-01-14T05:44:46.650470897Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 05:44:46.670003 containerd[1620]: time="2026-01-14T05:44:46.669559450Z" level=info msg="CreateContainer within sandbox \"e5561c3ca11b3cf9b59366226914f74ceb79285bef13e2b363e3b533d7c9fe79\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 05:44:46.691265 containerd[1620]: time="2026-01-14T05:44:46.691099107Z" level=info msg="Container 
d21779e523e1b13f7cff8c49f55766ceea0725e3ffc2751815ab462af3e9c877: CDI devices from CRI Config.CDIDevices: []" Jan 14 05:44:46.703858 containerd[1620]: time="2026-01-14T05:44:46.703528946Z" level=info msg="CreateContainer within sandbox \"e5561c3ca11b3cf9b59366226914f74ceb79285bef13e2b363e3b533d7c9fe79\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d21779e523e1b13f7cff8c49f55766ceea0725e3ffc2751815ab462af3e9c877\"" Jan 14 05:44:46.706670 containerd[1620]: time="2026-01-14T05:44:46.706559217Z" level=info msg="StartContainer for \"d21779e523e1b13f7cff8c49f55766ceea0725e3ffc2751815ab462af3e9c877\"" Jan 14 05:44:46.708254 containerd[1620]: time="2026-01-14T05:44:46.708112299Z" level=info msg="connecting to shim d21779e523e1b13f7cff8c49f55766ceea0725e3ffc2751815ab462af3e9c877" address="unix:///run/containerd/s/20caea5766dba3ab36181a6d04d3b25de1499705f9e49aa82b61646cb517715f" protocol=ttrpc version=3 Jan 14 05:44:46.773923 systemd[1]: Started cri-containerd-d21779e523e1b13f7cff8c49f55766ceea0725e3ffc2751815ab462af3e9c877.scope - libcontainer container d21779e523e1b13f7cff8c49f55766ceea0725e3ffc2751815ab462af3e9c877. 
Jan 14 05:44:46.803000 audit: BPF prog-id=144 op=LOAD Jan 14 05:44:46.805000 audit: BPF prog-id=145 op=LOAD Jan 14 05:44:46.805000 audit[3147]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2930 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:46.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432313737396535323365316231336637636666386334396635353736 Jan 14 05:44:46.805000 audit: BPF prog-id=145 op=UNLOAD Jan 14 05:44:46.805000 audit[3147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2930 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:46.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432313737396535323365316231336637636666386334396635353736 Jan 14 05:44:46.805000 audit: BPF prog-id=146 op=LOAD Jan 14 05:44:46.805000 audit[3147]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2930 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:46.805000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432313737396535323365316231336637636666386334396635353736 Jan 14 05:44:46.805000 audit: BPF prog-id=147 op=LOAD Jan 14 05:44:46.805000 audit[3147]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2930 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:46.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432313737396535323365316231336637636666386334396635353736 Jan 14 05:44:46.805000 audit: BPF prog-id=147 op=UNLOAD Jan 14 05:44:46.805000 audit[3147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2930 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:46.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432313737396535323365316231336637636666386334396635353736 Jan 14 05:44:46.805000 audit: BPF prog-id=146 op=UNLOAD Jan 14 05:44:46.805000 audit[3147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2930 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
05:44:46.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432313737396535323365316231336637636666386334396635353736 Jan 14 05:44:46.805000 audit: BPF prog-id=148 op=LOAD Jan 14 05:44:46.805000 audit[3147]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2930 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:46.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432313737396535323365316231336637636666386334396635353736 Jan 14 05:44:46.879543 containerd[1620]: time="2026-01-14T05:44:46.879313210Z" level=info msg="StartContainer for \"d21779e523e1b13f7cff8c49f55766ceea0725e3ffc2751815ab462af3e9c877\" returns successfully" Jan 14 05:44:49.793967 kubelet[2825]: E0114 05:44:49.793908 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:49.843133 kubelet[2825]: I0114 05:44:49.843030 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-gcc5s" podStartSLOduration=4.178101829 podStartE2EDuration="6.843013518s" podCreationTimestamp="2026-01-14 05:44:43 +0000 UTC" firstStartedPulling="2026-01-14 05:44:43.993015422 +0000 UTC m=+6.177494944" lastFinishedPulling="2026-01-14 05:44:46.657927112 +0000 UTC m=+8.842406633" observedRunningTime="2026-01-14 05:44:47.215952928 +0000 UTC m=+9.400432449" 
watchObservedRunningTime="2026-01-14 05:44:49.843013518 +0000 UTC m=+12.027493040" Jan 14 05:44:49.946606 kubelet[2825]: E0114 05:44:49.946501 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:44:53.970723 sudo[1827]: pam_unix(sudo:session): session closed for user root Jan 14 05:44:53.969000 audit[1827]: USER_END pid=1827 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 05:44:53.974775 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 05:44:53.974932 kernel: audit: type=1106 audit(1768369493.969:508): pid=1827 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 05:44:53.983368 sshd[1826]: Connection closed by 10.0.0.1 port 33938 Jan 14 05:44:53.984939 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Jan 14 05:44:53.969000 audit[1827]: CRED_DISP pid=1827 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 05:44:53.991101 systemd[1]: sshd@6-10.0.0.28:22-10.0.0.1:33938.service: Deactivated successfully. Jan 14 05:44:53.995062 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 05:44:53.995960 systemd[1]: session-8.scope: Consumed 11.237s CPU time, 220.8M memory peak. Jan 14 05:44:54.000332 systemd-logind[1596]: Session 8 logged out. Waiting for processes to exit. Jan 14 05:44:54.002749 systemd-logind[1596]: Removed session 8. 
Jan 14 05:44:54.007302 kernel: audit: type=1104 audit(1768369493.969:509): pid=1827 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 05:44:53.984000 audit[1822]: USER_END pid=1822 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:44:54.210793 kernel: audit: type=1106 audit(1768369493.984:510): pid=1822 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:44:54.214399 kernel: audit: type=1104 audit(1768369493.984:511): pid=1822 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:44:53.984000 audit[1822]: CRED_DISP pid=1822 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:44:53.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.28:22-10.0.0.1:33938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:44:54.532121 kernel: audit: type=1131 audit(1768369493.989:512): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.28:22-10.0.0.1:33938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:44:54.696000 audit[3242]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:54.716063 kernel: audit: type=1325 audit(1768369494.696:513): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:54.716152 kernel: audit: type=1300 audit(1768369494.696:513): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe52af7f50 a2=0 a3=7ffe52af7f3c items=0 ppid=2986 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:54.696000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe52af7f50 a2=0 a3=7ffe52af7f3c items=0 ppid=2986 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:54.696000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:54.746000 audit[3242]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:54.769713 kernel: audit: type=1327 audit(1768369494.696:513): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:54.769777 kernel: audit: type=1325 audit(1768369494.746:514): table=nat:106 
family=2 entries=12 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:54.770126 kernel: audit: type=1300 audit(1768369494.746:514): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe52af7f50 a2=0 a3=0 items=0 ppid=2986 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:54.746000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe52af7f50 a2=0 a3=0 items=0 ppid=2986 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:54.746000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:55.816000 audit[3244]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:55.816000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeff5e3030 a2=0 a3=7ffeff5e301c items=0 ppid=2986 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:55.816000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:55.821000 audit[3244]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:55.821000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeff5e3030 a2=0 a3=0 items=0 ppid=2986 pid=3244 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:55.821000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:57.822000 audit[3246]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:57.822000 audit[3246]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffced326650 a2=0 a3=7ffced32663c items=0 ppid=2986 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:57.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:57.827000 audit[3246]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:57.827000 audit[3246]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffced326650 a2=0 a3=0 items=0 ppid=2986 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:57.827000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:58.853000 audit[3249]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:58.853000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe7fd97850 a2=0 a3=7ffe7fd9783c items=0 ppid=2986 pid=3249 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:58.853000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:58.868000 audit[3249]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:58.868000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe7fd97850 a2=0 a3=0 items=0 ppid=2986 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:58.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:59.859435 systemd[1]: Created slice kubepods-besteffort-podcdd06579_e2ff_4089_a692_a44e2e665f93.slice - libcontainer container kubepods-besteffort-podcdd06579_e2ff_4089_a692_a44e2e665f93.slice. 
Jan 14 05:44:59.892432 kubelet[2825]: I0114 05:44:59.892395 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s578\" (UniqueName: \"kubernetes.io/projected/cdd06579-e2ff-4089-a692-a44e2e665f93-kube-api-access-2s578\") pod \"calico-typha-54ddc7c756-bnrv5\" (UID: \"cdd06579-e2ff-4089-a692-a44e2e665f93\") " pod="calico-system/calico-typha-54ddc7c756-bnrv5" Jan 14 05:44:59.894039 kubelet[2825]: I0114 05:44:59.893692 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdd06579-e2ff-4089-a692-a44e2e665f93-tigera-ca-bundle\") pod \"calico-typha-54ddc7c756-bnrv5\" (UID: \"cdd06579-e2ff-4089-a692-a44e2e665f93\") " pod="calico-system/calico-typha-54ddc7c756-bnrv5" Jan 14 05:44:59.894039 kubelet[2825]: I0114 05:44:59.893724 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cdd06579-e2ff-4089-a692-a44e2e665f93-typha-certs\") pod \"calico-typha-54ddc7c756-bnrv5\" (UID: \"cdd06579-e2ff-4089-a692-a44e2e665f93\") " pod="calico-system/calico-typha-54ddc7c756-bnrv5" Jan 14 05:44:59.913000 audit[3251]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:59.920548 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 14 05:44:59.920777 kernel: audit: type=1325 audit(1768369499.913:521): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:59.913000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc40e4ffc0 a2=0 a3=7ffc40e4ffac items=0 ppid=2986 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:59.956967 kernel: audit: type=1300 audit(1768369499.913:521): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc40e4ffc0 a2=0 a3=7ffc40e4ffac items=0 ppid=2986 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:59.913000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:59.964000 audit[3251]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:59.983353 kernel: audit: type=1327 audit(1768369499.913:521): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:59.983429 kernel: audit: type=1325 audit(1768369499.964:522): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:44:59.964000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc40e4ffc0 a2=0 a3=0 items=0 ppid=2986 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:44:59.986796 systemd[1]: Created slice kubepods-besteffort-podeb39adc6_bd8d_4b38_8f0b_1238dd6cafef.slice - libcontainer container kubepods-besteffort-podeb39adc6_bd8d_4b38_8f0b_1238dd6cafef.slice. 
Jan 14 05:44:59.994409 kubelet[2825]: I0114 05:44:59.994384 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb39adc6-bd8d-4b38-8f0b-1238dd6cafef-tigera-ca-bundle\") pod \"calico-node-6djrl\" (UID: \"eb39adc6-bd8d-4b38-8f0b-1238dd6cafef\") " pod="calico-system/calico-node-6djrl" Jan 14 05:44:59.995273 kubelet[2825]: I0114 05:44:59.994947 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eb39adc6-bd8d-4b38-8f0b-1238dd6cafef-var-lib-calico\") pod \"calico-node-6djrl\" (UID: \"eb39adc6-bd8d-4b38-8f0b-1238dd6cafef\") " pod="calico-system/calico-node-6djrl" Jan 14 05:44:59.995273 kubelet[2825]: I0114 05:44:59.994972 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/eb39adc6-bd8d-4b38-8f0b-1238dd6cafef-cni-log-dir\") pod \"calico-node-6djrl\" (UID: \"eb39adc6-bd8d-4b38-8f0b-1238dd6cafef\") " pod="calico-system/calico-node-6djrl" Jan 14 05:44:59.995273 kubelet[2825]: I0114 05:44:59.994988 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/eb39adc6-bd8d-4b38-8f0b-1238dd6cafef-flexvol-driver-host\") pod \"calico-node-6djrl\" (UID: \"eb39adc6-bd8d-4b38-8f0b-1238dd6cafef\") " pod="calico-system/calico-node-6djrl" Jan 14 05:44:59.995273 kubelet[2825]: I0114 05:44:59.995001 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/eb39adc6-bd8d-4b38-8f0b-1238dd6cafef-var-run-calico\") pod \"calico-node-6djrl\" (UID: \"eb39adc6-bd8d-4b38-8f0b-1238dd6cafef\") " pod="calico-system/calico-node-6djrl" Jan 14 05:44:59.995273 kubelet[2825]: I0114 
05:44:59.995014 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb39adc6-bd8d-4b38-8f0b-1238dd6cafef-xtables-lock\") pod \"calico-node-6djrl\" (UID: \"eb39adc6-bd8d-4b38-8f0b-1238dd6cafef\") " pod="calico-system/calico-node-6djrl" Jan 14 05:44:59.995406 kubelet[2825]: I0114 05:44:59.995026 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/eb39adc6-bd8d-4b38-8f0b-1238dd6cafef-cni-bin-dir\") pod \"calico-node-6djrl\" (UID: \"eb39adc6-bd8d-4b38-8f0b-1238dd6cafef\") " pod="calico-system/calico-node-6djrl" Jan 14 05:44:59.995406 kubelet[2825]: I0114 05:44:59.995038 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/eb39adc6-bd8d-4b38-8f0b-1238dd6cafef-cni-net-dir\") pod \"calico-node-6djrl\" (UID: \"eb39adc6-bd8d-4b38-8f0b-1238dd6cafef\") " pod="calico-system/calico-node-6djrl" Jan 14 05:44:59.995406 kubelet[2825]: I0114 05:44:59.995054 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb39adc6-bd8d-4b38-8f0b-1238dd6cafef-lib-modules\") pod \"calico-node-6djrl\" (UID: \"eb39adc6-bd8d-4b38-8f0b-1238dd6cafef\") " pod="calico-system/calico-node-6djrl" Jan 14 05:44:59.995406 kubelet[2825]: I0114 05:44:59.995075 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/eb39adc6-bd8d-4b38-8f0b-1238dd6cafef-policysync\") pod \"calico-node-6djrl\" (UID: \"eb39adc6-bd8d-4b38-8f0b-1238dd6cafef\") " pod="calico-system/calico-node-6djrl" Jan 14 05:44:59.995406 kubelet[2825]: I0114 05:44:59.995088 2825 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt7qp\" (UniqueName: \"kubernetes.io/projected/eb39adc6-bd8d-4b38-8f0b-1238dd6cafef-kube-api-access-qt7qp\") pod \"calico-node-6djrl\" (UID: \"eb39adc6-bd8d-4b38-8f0b-1238dd6cafef\") " pod="calico-system/calico-node-6djrl" Jan 14 05:44:59.995510 kubelet[2825]: I0114 05:44:59.995110 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/eb39adc6-bd8d-4b38-8f0b-1238dd6cafef-node-certs\") pod \"calico-node-6djrl\" (UID: \"eb39adc6-bd8d-4b38-8f0b-1238dd6cafef\") " pod="calico-system/calico-node-6djrl" Jan 14 05:45:00.006133 kernel: audit: type=1300 audit(1768369499.964:522): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc40e4ffc0 a2=0 a3=0 items=0 ppid=2986 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.008349 kernel: audit: type=1327 audit(1768369499.964:522): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:44:59.964000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:45:00.103053 kubelet[2825]: E0114 05:45:00.102544 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.103053 kubelet[2825]: W0114 05:45:00.102577 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.103053 kubelet[2825]: E0114 05:45:00.102615 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.122146 kubelet[2825]: E0114 05:45:00.121431 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.122146 kubelet[2825]: W0114 05:45:00.121452 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.122146 kubelet[2825]: E0114 05:45:00.121470 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.146462 kubelet[2825]: E0114 05:45:00.146382 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.146462 kubelet[2825]: W0114 05:45:00.146403 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.146462 kubelet[2825]: E0114 05:45:00.146422 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.159809 kubelet[2825]: E0114 05:45:00.159610 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:45:00.174629 kubelet[2825]: E0114 05:45:00.174460 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:00.177275 containerd[1620]: time="2026-01-14T05:45:00.176080083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54ddc7c756-bnrv5,Uid:cdd06579-e2ff-4089-a692-a44e2e665f93,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:00.195676 kubelet[2825]: E0114 05:45:00.195638 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.197077 kubelet[2825]: W0114 05:45:00.196472 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.197077 kubelet[2825]: E0114 05:45:00.196631 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.199994 kubelet[2825]: E0114 05:45:00.199813 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.200051 kubelet[2825]: W0114 05:45:00.200039 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.200371 kubelet[2825]: E0114 05:45:00.200058 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.202396 kubelet[2825]: E0114 05:45:00.201981 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.202396 kubelet[2825]: W0114 05:45:00.202281 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.202396 kubelet[2825]: E0114 05:45:00.202295 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.204903 kubelet[2825]: E0114 05:45:00.204521 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.208484 kubelet[2825]: W0114 05:45:00.205081 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.208484 kubelet[2825]: E0114 05:45:00.205603 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.208484 kubelet[2825]: E0114 05:45:00.208055 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.208484 kubelet[2825]: W0114 05:45:00.208454 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.208484 kubelet[2825]: E0114 05:45:00.208474 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.209266 kubelet[2825]: I0114 05:45:00.208939 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b560ec8-f090-4614-a1d5-13a4bc0ce8dc-kubelet-dir\") pod \"csi-node-driver-46w2k\" (UID: \"2b560ec8-f090-4614-a1d5-13a4bc0ce8dc\") " pod="calico-system/csi-node-driver-46w2k" Jan 14 05:45:00.211550 kubelet[2825]: E0114 05:45:00.210063 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.211550 kubelet[2825]: W0114 05:45:00.210130 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.211550 kubelet[2825]: E0114 05:45:00.210142 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.215522 kubelet[2825]: E0114 05:45:00.215387 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.215522 kubelet[2825]: W0114 05:45:00.215446 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.215522 kubelet[2825]: E0114 05:45:00.215459 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.219416 kubelet[2825]: E0114 05:45:00.219365 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.219416 kubelet[2825]: W0114 05:45:00.219391 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.219416 kubelet[2825]: E0114 05:45:00.219403 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.220121 kubelet[2825]: E0114 05:45:00.219812 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.220121 kubelet[2825]: W0114 05:45:00.219828 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.220121 kubelet[2825]: E0114 05:45:00.219842 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.221655 kubelet[2825]: E0114 05:45:00.220825 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.221655 kubelet[2825]: W0114 05:45:00.220835 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.221655 kubelet[2825]: E0114 05:45:00.220907 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.221917 kubelet[2825]: E0114 05:45:00.221829 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.221917 kubelet[2825]: W0114 05:45:00.221911 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.221978 kubelet[2825]: E0114 05:45:00.221921 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.222915 kubelet[2825]: E0114 05:45:00.222603 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.222915 kubelet[2825]: W0114 05:45:00.222679 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.222915 kubelet[2825]: E0114 05:45:00.222695 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.223728 kubelet[2825]: E0114 05:45:00.223669 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.223728 kubelet[2825]: W0114 05:45:00.223686 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.223728 kubelet[2825]: E0114 05:45:00.223696 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.225028 kubelet[2825]: E0114 05:45:00.224954 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.225028 kubelet[2825]: W0114 05:45:00.224969 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.225028 kubelet[2825]: E0114 05:45:00.224978 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.226833 kubelet[2825]: E0114 05:45:00.226731 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.226833 kubelet[2825]: W0114 05:45:00.226752 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.226833 kubelet[2825]: E0114 05:45:00.226767 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.229096 kubelet[2825]: E0114 05:45:00.227587 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.229096 kubelet[2825]: W0114 05:45:00.227664 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.229406 kubelet[2825]: E0114 05:45:00.229308 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.232983 kubelet[2825]: E0114 05:45:00.232807 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.232983 kubelet[2825]: W0114 05:45:00.232931 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.232983 kubelet[2825]: E0114 05:45:00.232944 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.233501 kubelet[2825]: E0114 05:45:00.233410 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.233501 kubelet[2825]: W0114 05:45:00.233469 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.233501 kubelet[2825]: E0114 05:45:00.233480 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.237560 kubelet[2825]: E0114 05:45:00.237430 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.237560 kubelet[2825]: W0114 05:45:00.237511 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.237560 kubelet[2825]: E0114 05:45:00.237532 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.239616 kubelet[2825]: E0114 05:45:00.239350 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.239616 kubelet[2825]: W0114 05:45:00.239363 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.239616 kubelet[2825]: E0114 05:45:00.239372 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.240119 kubelet[2825]: E0114 05:45:00.239997 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.240119 kubelet[2825]: W0114 05:45:00.240061 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.240119 kubelet[2825]: E0114 05:45:00.240071 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.241580 kubelet[2825]: E0114 05:45:00.241524 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.241580 kubelet[2825]: W0114 05:45:00.241536 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.241580 kubelet[2825]: E0114 05:45:00.241546 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.242755 kubelet[2825]: E0114 05:45:00.242152 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.242755 kubelet[2825]: W0114 05:45:00.242271 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.242755 kubelet[2825]: E0114 05:45:00.242284 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.259658 containerd[1620]: time="2026-01-14T05:45:00.259384625Z" level=info msg="connecting to shim 1f8077c39871b3c7b262f9f8c4ef1c56c627a87e6a7c5c481feb9bfef80a8be5" address="unix:///run/containerd/s/bbede4585902873697e03e79704e5093fa131df3bf7a94308cd4c65889fb2fa3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:45:00.314358 kubelet[2825]: E0114 05:45:00.313558 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:00.319999 kubelet[2825]: E0114 05:45:00.319974 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.320130 kubelet[2825]: W0114 05:45:00.320114 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.320424 kubelet[2825]: E0114 05:45:00.320407 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.320541 kubelet[2825]: I0114 05:45:00.320522 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqrlm\" (UniqueName: \"kubernetes.io/projected/2b560ec8-f090-4614-a1d5-13a4bc0ce8dc-kube-api-access-sqrlm\") pod \"csi-node-driver-46w2k\" (UID: \"2b560ec8-f090-4614-a1d5-13a4bc0ce8dc\") " pod="calico-system/csi-node-driver-46w2k" Jan 14 05:45:00.322902 kubelet[2825]: E0114 05:45:00.322831 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.323043 kubelet[2825]: W0114 05:45:00.322951 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.323043 kubelet[2825]: E0114 05:45:00.322964 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.324266 kubelet[2825]: E0114 05:45:00.323992 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.324266 kubelet[2825]: W0114 05:45:00.324052 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.324266 kubelet[2825]: E0114 05:45:00.324063 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.326322 kubelet[2825]: E0114 05:45:00.326037 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.326380 kubelet[2825]: W0114 05:45:00.326342 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.326720 kubelet[2825]: E0114 05:45:00.326358 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.327577 containerd[1620]: time="2026-01-14T05:45:00.327360054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6djrl,Uid:eb39adc6-bd8d-4b38-8f0b-1238dd6cafef,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:00.328731 kubelet[2825]: E0114 05:45:00.328539 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.328731 kubelet[2825]: W0114 05:45:00.328595 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.328731 kubelet[2825]: E0114 05:45:00.328605 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.328731 kubelet[2825]: I0114 05:45:00.328625 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2b560ec8-f090-4614-a1d5-13a4bc0ce8dc-socket-dir\") pod \"csi-node-driver-46w2k\" (UID: \"2b560ec8-f090-4614-a1d5-13a4bc0ce8dc\") " pod="calico-system/csi-node-driver-46w2k" Jan 14 05:45:00.329572 kubelet[2825]: E0114 05:45:00.329349 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.329572 kubelet[2825]: W0114 05:45:00.329567 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.329645 kubelet[2825]: E0114 05:45:00.329578 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.330118 kubelet[2825]: I0114 05:45:00.330066 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2b560ec8-f090-4614-a1d5-13a4bc0ce8dc-varrun\") pod \"csi-node-driver-46w2k\" (UID: \"2b560ec8-f090-4614-a1d5-13a4bc0ce8dc\") " pod="calico-system/csi-node-driver-46w2k" Jan 14 05:45:00.333580 kubelet[2825]: E0114 05:45:00.333152 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.333761 kubelet[2825]: W0114 05:45:00.333579 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.333801 kubelet[2825]: E0114 05:45:00.333765 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.334679 kubelet[2825]: E0114 05:45:00.334535 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.334679 kubelet[2825]: W0114 05:45:00.334628 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.334679 kubelet[2825]: E0114 05:45:00.334658 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.336040 kubelet[2825]: E0114 05:45:00.335730 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.336040 kubelet[2825]: W0114 05:45:00.335804 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.336040 kubelet[2825]: E0114 05:45:00.335820 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.336590 kubelet[2825]: I0114 05:45:00.336469 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2b560ec8-f090-4614-a1d5-13a4bc0ce8dc-registration-dir\") pod \"csi-node-driver-46w2k\" (UID: \"2b560ec8-f090-4614-a1d5-13a4bc0ce8dc\") " pod="calico-system/csi-node-driver-46w2k" Jan 14 05:45:00.336755 kubelet[2825]: E0114 05:45:00.336728 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.336755 kubelet[2825]: W0114 05:45:00.336740 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.336755 kubelet[2825]: E0114 05:45:00.336749 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.338262 kubelet[2825]: E0114 05:45:00.338078 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.338612 kubelet[2825]: W0114 05:45:00.338347 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.338612 kubelet[2825]: E0114 05:45:00.338405 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.339041 kubelet[2825]: E0114 05:45:00.338934 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.339041 kubelet[2825]: W0114 05:45:00.338993 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.339041 kubelet[2825]: E0114 05:45:00.339003 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.340115 kubelet[2825]: E0114 05:45:00.339809 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.340115 kubelet[2825]: W0114 05:45:00.339968 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.340115 kubelet[2825]: E0114 05:45:00.339979 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.341491 kubelet[2825]: E0114 05:45:00.341356 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.341491 kubelet[2825]: W0114 05:45:00.341418 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.341491 kubelet[2825]: E0114 05:45:00.341430 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.342028 kubelet[2825]: E0114 05:45:00.341956 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.342028 kubelet[2825]: W0114 05:45:00.342018 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.342028 kubelet[2825]: E0114 05:45:00.342028 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.344607 kubelet[2825]: E0114 05:45:00.343445 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.344607 kubelet[2825]: W0114 05:45:00.343510 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.344607 kubelet[2825]: E0114 05:45:00.343521 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.344607 kubelet[2825]: E0114 05:45:00.344013 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.344607 kubelet[2825]: W0114 05:45:00.344022 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.344607 kubelet[2825]: E0114 05:45:00.344031 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.354716 systemd[1]: Started cri-containerd-1f8077c39871b3c7b262f9f8c4ef1c56c627a87e6a7c5c481feb9bfef80a8be5.scope - libcontainer container 1f8077c39871b3c7b262f9f8c4ef1c56c627a87e6a7c5c481feb9bfef80a8be5. Jan 14 05:45:00.391517 containerd[1620]: time="2026-01-14T05:45:00.389746104Z" level=info msg="connecting to shim 91893bc284ee48de678f9cab15a7823ca9a74dc527b4425406f5cd97781170cc" address="unix:///run/containerd/s/09c0563195b1d67be1a967f7153904192fa272382dddeae1267081b8ffb3d47a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:45:00.392000 audit: BPF prog-id=149 op=LOAD Jan 14 05:45:00.401368 kernel: audit: type=1334 audit(1768369500.392:523): prog-id=149 op=LOAD Jan 14 05:45:00.410084 kernel: audit: type=1334 audit(1768369500.400:524): prog-id=150 op=LOAD Jan 14 05:45:00.400000 audit: BPF prog-id=150 op=LOAD Jan 14 05:45:00.400000 audit[3312]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3300 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.400000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166383037376333393837316233633762323632663966386334656631 Jan 14 05:45:00.452241 kernel: audit: type=1300 audit(1768369500.400:524): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3300 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.452417 kernel: audit: type=1327 audit(1768369500.400:524): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166383037376333393837316233633762323632663966386334656631 Jan 14 05:45:00.400000 audit: BPF prog-id=150 op=UNLOAD Jan 14 05:45:00.400000 audit[3312]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166383037376333393837316233633762323632663966386334656631 Jan 14 05:45:00.409000 audit: BPF prog-id=151 op=LOAD Jan 14 05:45:00.409000 audit[3312]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3300 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166383037376333393837316233633762323632663966386334656631 Jan 14 05:45:00.409000 audit: BPF prog-id=152 op=LOAD Jan 14 05:45:00.409000 audit[3312]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3300 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166383037376333393837316233633762323632663966386334656631 Jan 14 05:45:00.409000 audit: BPF prog-id=152 op=UNLOAD Jan 14 05:45:00.409000 audit[3312]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166383037376333393837316233633762323632663966386334656631 Jan 14 05:45:00.409000 audit: BPF prog-id=151 op=UNLOAD Jan 14 05:45:00.409000 audit[3312]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166383037376333393837316233633762323632663966386334656631 Jan 14 05:45:00.409000 audit: BPF prog-id=153 op=LOAD Jan 14 05:45:00.453514 kubelet[2825]: E0114 05:45:00.453482 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.453514 kubelet[2825]: W0114 05:45:00.453502 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.453645 kubelet[2825]: E0114 05:45:00.453524 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.409000 audit[3312]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3300 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.455062 kubelet[2825]: E0114 05:45:00.454373 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.455062 kubelet[2825]: W0114 05:45:00.454386 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.455062 kubelet[2825]: E0114 05:45:00.454399 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166383037376333393837316233633762323632663966386334656631 Jan 14 05:45:00.455797 kubelet[2825]: E0114 05:45:00.455671 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.455797 kubelet[2825]: W0114 05:45:00.455734 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.455797 kubelet[2825]: E0114 05:45:00.455746 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.458574 kubelet[2825]: E0114 05:45:00.458360 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.458574 kubelet[2825]: W0114 05:45:00.458425 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.458574 kubelet[2825]: E0114 05:45:00.458437 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.464254 kubelet[2825]: E0114 05:45:00.463840 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.464538 kubelet[2825]: W0114 05:45:00.464419 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.465079 kubelet[2825]: E0114 05:45:00.465006 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.467031 kubelet[2825]: E0114 05:45:00.466918 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.467031 kubelet[2825]: W0114 05:45:00.466987 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.467031 kubelet[2825]: E0114 05:45:00.467000 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.469423 kubelet[2825]: E0114 05:45:00.469328 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.469423 kubelet[2825]: W0114 05:45:00.469399 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.469423 kubelet[2825]: E0114 05:45:00.469412 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.470654 kubelet[2825]: E0114 05:45:00.470563 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.470654 kubelet[2825]: W0114 05:45:00.470633 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.470654 kubelet[2825]: E0114 05:45:00.470644 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.472708 kubelet[2825]: E0114 05:45:00.472615 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.472708 kubelet[2825]: W0114 05:45:00.472684 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.472708 kubelet[2825]: E0114 05:45:00.472695 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.475095 kubelet[2825]: E0114 05:45:00.475036 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.475095 kubelet[2825]: W0114 05:45:00.475048 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.475095 kubelet[2825]: E0114 05:45:00.475056 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.477380 kubelet[2825]: E0114 05:45:00.476403 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.477380 kubelet[2825]: W0114 05:45:00.476418 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.477380 kubelet[2825]: E0114 05:45:00.476429 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.478558 kubelet[2825]: E0114 05:45:00.478466 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.478558 kubelet[2825]: W0114 05:45:00.478555 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.478627 kubelet[2825]: E0114 05:45:00.478572 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.479698 systemd[1]: Started cri-containerd-91893bc284ee48de678f9cab15a7823ca9a74dc527b4425406f5cd97781170cc.scope - libcontainer container 91893bc284ee48de678f9cab15a7823ca9a74dc527b4425406f5cd97781170cc. Jan 14 05:45:00.482770 kubelet[2825]: E0114 05:45:00.482684 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.482770 kubelet[2825]: W0114 05:45:00.482696 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.482770 kubelet[2825]: E0114 05:45:00.482708 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.485035 kubelet[2825]: E0114 05:45:00.484930 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.485035 kubelet[2825]: W0114 05:45:00.484999 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.485035 kubelet[2825]: E0114 05:45:00.485011 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.486836 kubelet[2825]: E0114 05:45:00.486701 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.486836 kubelet[2825]: W0114 05:45:00.486733 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.486836 kubelet[2825]: E0114 05:45:00.486764 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.490321 kubelet[2825]: E0114 05:45:00.488524 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.490321 kubelet[2825]: W0114 05:45:00.488538 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.490321 kubelet[2825]: E0114 05:45:00.488698 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.491443 kubelet[2825]: E0114 05:45:00.491379 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.491443 kubelet[2825]: W0114 05:45:00.491441 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.491513 kubelet[2825]: E0114 05:45:00.491452 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.492654 kubelet[2825]: E0114 05:45:00.492589 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.492699 kubelet[2825]: W0114 05:45:00.492655 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.492699 kubelet[2825]: E0114 05:45:00.492666 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.494105 kubelet[2825]: E0114 05:45:00.494023 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.494105 kubelet[2825]: W0114 05:45:00.494091 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.494105 kubelet[2825]: E0114 05:45:00.494101 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.494561 kubelet[2825]: E0114 05:45:00.494543 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.494561 kubelet[2825]: W0114 05:45:00.494554 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.494613 kubelet[2825]: E0114 05:45:00.494563 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:00.496123 kubelet[2825]: E0114 05:45:00.496012 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:00.496123 kubelet[2825]: W0114 05:45:00.496025 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:00.496123 kubelet[2825]: E0114 05:45:00.496036 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:00.520000 audit: BPF prog-id=154 op=LOAD Jan 14 05:45:00.522000 audit: BPF prog-id=155 op=LOAD Jan 14 05:45:00.522000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3359 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931383933626332383465653438646536373866396361623135613738 Jan 14 05:45:00.522000 audit: BPF prog-id=155 op=UNLOAD Jan 14 05:45:00.522000 audit[3370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3359 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931383933626332383465653438646536373866396361623135613738 Jan 14 05:45:00.522000 audit: BPF prog-id=156 op=LOAD Jan 14 05:45:00.522000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3359 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.522000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931383933626332383465653438646536373866396361623135613738 Jan 14 05:45:00.522000 audit: BPF prog-id=157 op=LOAD Jan 14 05:45:00.522000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3359 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931383933626332383465653438646536373866396361623135613738 Jan 14 05:45:00.522000 audit: BPF prog-id=157 op=UNLOAD Jan 14 05:45:00.522000 audit[3370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3359 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931383933626332383465653438646536373866396361623135613738 Jan 14 05:45:00.522000 audit: BPF prog-id=156 op=UNLOAD Jan 14 05:45:00.522000 audit[3370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3359 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
05:45:00.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931383933626332383465653438646536373866396361623135613738 Jan 14 05:45:00.522000 audit: BPF prog-id=158 op=LOAD Jan 14 05:45:00.522000 audit[3370]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3359 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:00.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931383933626332383465653438646536373866396361623135613738 Jan 14 05:45:00.568485 containerd[1620]: time="2026-01-14T05:45:00.568417806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54ddc7c756-bnrv5,Uid:cdd06579-e2ff-4089-a692-a44e2e665f93,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f8077c39871b3c7b262f9f8c4ef1c56c627a87e6a7c5c481feb9bfef80a8be5\"" Jan 14 05:45:00.572058 kubelet[2825]: E0114 05:45:00.571648 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:00.578752 containerd[1620]: time="2026-01-14T05:45:00.578503715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 05:45:00.600342 containerd[1620]: time="2026-01-14T05:45:00.600136342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6djrl,Uid:eb39adc6-bd8d-4b38-8f0b-1238dd6cafef,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"91893bc284ee48de678f9cab15a7823ca9a74dc527b4425406f5cd97781170cc\"" Jan 14 05:45:00.602672 kubelet[2825]: E0114 05:45:00.602565 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:01.801624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668667048.mount: Deactivated successfully. Jan 14 05:45:02.101068 kubelet[2825]: E0114 05:45:02.100652 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:45:03.540513 containerd[1620]: time="2026-01-14T05:45:03.540388719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:45:03.542212 containerd[1620]: time="2026-01-14T05:45:03.542184348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 14 05:45:03.544385 containerd[1620]: time="2026-01-14T05:45:03.544020392Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:45:03.548063 containerd[1620]: time="2026-01-14T05:45:03.547714934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:45:03.548446 containerd[1620]: time="2026-01-14T05:45:03.548330001Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.969744544s" Jan 14 05:45:03.548498 containerd[1620]: time="2026-01-14T05:45:03.548365366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 05:45:03.551003 containerd[1620]: time="2026-01-14T05:45:03.550764219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 05:45:03.581609 containerd[1620]: time="2026-01-14T05:45:03.581557034Z" level=info msg="CreateContainer within sandbox \"1f8077c39871b3c7b262f9f8c4ef1c56c627a87e6a7c5c481feb9bfef80a8be5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 05:45:03.600966 containerd[1620]: time="2026-01-14T05:45:03.600693175Z" level=info msg="Container 30a768e4c9998df1419c0cb9cd18820579861973bf58ef81965aa4f503b63a5a: CDI devices from CRI Config.CDIDevices: []" Jan 14 05:45:03.615526 containerd[1620]: time="2026-01-14T05:45:03.615440421Z" level=info msg="CreateContainer within sandbox \"1f8077c39871b3c7b262f9f8c4ef1c56c627a87e6a7c5c481feb9bfef80a8be5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"30a768e4c9998df1419c0cb9cd18820579861973bf58ef81965aa4f503b63a5a\"" Jan 14 05:45:03.617309 containerd[1620]: time="2026-01-14T05:45:03.617027107Z" level=info msg="StartContainer for \"30a768e4c9998df1419c0cb9cd18820579861973bf58ef81965aa4f503b63a5a\"" Jan 14 05:45:03.619127 containerd[1620]: time="2026-01-14T05:45:03.619088533Z" level=info msg="connecting to shim 30a768e4c9998df1419c0cb9cd18820579861973bf58ef81965aa4f503b63a5a" address="unix:///run/containerd/s/bbede4585902873697e03e79704e5093fa131df3bf7a94308cd4c65889fb2fa3" protocol=ttrpc version=3 Jan 14 
05:45:03.659651 systemd[1]: Started cri-containerd-30a768e4c9998df1419c0cb9cd18820579861973bf58ef81965aa4f503b63a5a.scope - libcontainer container 30a768e4c9998df1419c0cb9cd18820579861973bf58ef81965aa4f503b63a5a. Jan 14 05:45:03.720000 audit: BPF prog-id=159 op=LOAD Jan 14 05:45:03.721000 audit: BPF prog-id=160 op=LOAD Jan 14 05:45:03.721000 audit[3438]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3300 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:03.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613736386534633939393864663134313963306362396364313838 Jan 14 05:45:03.721000 audit: BPF prog-id=160 op=UNLOAD Jan 14 05:45:03.721000 audit[3438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:03.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613736386534633939393864663134313963306362396364313838 Jan 14 05:45:03.722000 audit: BPF prog-id=161 op=LOAD Jan 14 05:45:03.722000 audit[3438]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3300 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 05:45:03.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613736386534633939393864663134313963306362396364313838 Jan 14 05:45:03.722000 audit: BPF prog-id=162 op=LOAD Jan 14 05:45:03.722000 audit[3438]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3300 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:03.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613736386534633939393864663134313963306362396364313838 Jan 14 05:45:03.722000 audit: BPF prog-id=162 op=UNLOAD Jan 14 05:45:03.722000 audit[3438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:03.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613736386534633939393864663134313963306362396364313838 Jan 14 05:45:03.722000 audit: BPF prog-id=161 op=UNLOAD Jan 14 05:45:03.722000 audit[3438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:03.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613736386534633939393864663134313963306362396364313838 Jan 14 05:45:03.722000 audit: BPF prog-id=163 op=LOAD Jan 14 05:45:03.722000 audit[3438]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3300 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:03.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613736386534633939393864663134313963306362396364313838 Jan 14 05:45:03.815710 containerd[1620]: time="2026-01-14T05:45:03.812717148Z" level=info msg="StartContainer for \"30a768e4c9998df1419c0cb9cd18820579861973bf58ef81965aa4f503b63a5a\" returns successfully" Jan 14 05:45:04.105778 kubelet[2825]: E0114 05:45:04.104782 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:45:04.325981 kubelet[2825]: E0114 05:45:04.324969 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:04.394587 kubelet[2825]: E0114 05:45:04.392018 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: 
"", error: unexpected end of JSON input Jan 14 05:45:04.394587 kubelet[2825]: W0114 05:45:04.392052 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.394587 kubelet[2825]: E0114 05:45:04.392076 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.394587 kubelet[2825]: E0114 05:45:04.394369 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.394587 kubelet[2825]: W0114 05:45:04.394382 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.394587 kubelet[2825]: E0114 05:45:04.394397 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.396366 kubelet[2825]: E0114 05:45:04.396287 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.396481 kubelet[2825]: W0114 05:45:04.396366 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.396481 kubelet[2825]: E0114 05:45:04.396383 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.398820 kubelet[2825]: E0114 05:45:04.398725 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.398820 kubelet[2825]: W0114 05:45:04.398813 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.399000 kubelet[2825]: E0114 05:45:04.398829 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.400823 kubelet[2825]: E0114 05:45:04.400699 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.400823 kubelet[2825]: W0114 05:45:04.400785 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.400823 kubelet[2825]: E0114 05:45:04.400802 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.405547 kubelet[2825]: E0114 05:45:04.404135 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.405547 kubelet[2825]: W0114 05:45:04.405537 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.405645 kubelet[2825]: E0114 05:45:04.405552 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.406686 kubelet[2825]: E0114 05:45:04.406604 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.407020 kubelet[2825]: W0114 05:45:04.406815 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.407020 kubelet[2825]: E0114 05:45:04.406970 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.411264 kubelet[2825]: E0114 05:45:04.410583 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.411264 kubelet[2825]: W0114 05:45:04.410600 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.411264 kubelet[2825]: E0114 05:45:04.410613 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.413455 kubelet[2825]: E0114 05:45:04.413367 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.413455 kubelet[2825]: W0114 05:45:04.413451 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.413526 kubelet[2825]: E0114 05:45:04.413467 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.415044 kubelet[2825]: E0114 05:45:04.414877 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.415044 kubelet[2825]: W0114 05:45:04.415038 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.415147 kubelet[2825]: E0114 05:45:04.415059 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.420277 kubelet[2825]: E0114 05:45:04.419364 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.420277 kubelet[2825]: W0114 05:45:04.419384 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.420277 kubelet[2825]: E0114 05:45:04.419402 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.422143 kubelet[2825]: E0114 05:45:04.422063 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.422423 kubelet[2825]: W0114 05:45:04.422144 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.424289 kubelet[2825]: E0114 05:45:04.422685 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.427751 kubelet[2825]: E0114 05:45:04.427659 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.427751 kubelet[2825]: W0114 05:45:04.427739 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.427827 kubelet[2825]: E0114 05:45:04.427758 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.430671 kubelet[2825]: E0114 05:45:04.430561 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.430723 kubelet[2825]: W0114 05:45:04.430668 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.430723 kubelet[2825]: E0114 05:45:04.430701 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.431454 kubelet[2825]: E0114 05:45:04.431107 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.434691 kubelet[2825]: W0114 05:45:04.434343 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.434691 kubelet[2825]: E0114 05:45:04.434447 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.459823 containerd[1620]: time="2026-01-14T05:45:04.459735392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:45:04.464317 containerd[1620]: time="2026-01-14T05:45:04.463524019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 05:45:04.466643 containerd[1620]: time="2026-01-14T05:45:04.466520134Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:45:04.470633 containerd[1620]: time="2026-01-14T05:45:04.470524304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:45:04.472471 containerd[1620]: time="2026-01-14T05:45:04.472134617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 921.337297ms" Jan 14 05:45:04.472471 containerd[1620]: time="2026-01-14T05:45:04.472405432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 05:45:04.482967 containerd[1620]: time="2026-01-14T05:45:04.482793108Z" level=info msg="CreateContainer within sandbox \"91893bc284ee48de678f9cab15a7823ca9a74dc527b4425406f5cd97781170cc\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 05:45:04.502084 containerd[1620]: time="2026-01-14T05:45:04.501843714Z" level=info msg="Container 04e0d9744010bb7eafe5d5996b3bb9b4d5d361469804dee4108814fcfb093a29: CDI devices from CRI Config.CDIDevices: []" Jan 14 05:45:04.525589 kubelet[2825]: E0114 05:45:04.525005 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.525589 kubelet[2825]: W0114 05:45:04.525035 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.525589 kubelet[2825]: E0114 05:45:04.525059 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.525589 kubelet[2825]: E0114 05:45:04.525492 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.525589 kubelet[2825]: W0114 05:45:04.525501 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.525589 kubelet[2825]: E0114 05:45:04.525510 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.526355 kubelet[2825]: E0114 05:45:04.526046 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.526355 kubelet[2825]: W0114 05:45:04.526115 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.526355 kubelet[2825]: E0114 05:45:04.526125 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.526678 kubelet[2825]: E0114 05:45:04.526587 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.526678 kubelet[2825]: W0114 05:45:04.526664 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.526678 kubelet[2825]: E0114 05:45:04.526684 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.527697 kubelet[2825]: E0114 05:45:04.527035 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.527697 kubelet[2825]: W0114 05:45:04.527047 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.527697 kubelet[2825]: E0114 05:45:04.527061 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.527697 kubelet[2825]: E0114 05:45:04.527681 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.527697 kubelet[2825]: W0114 05:45:04.527696 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.528763 kubelet[2825]: E0114 05:45:04.527710 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.528763 kubelet[2825]: E0114 05:45:04.528637 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.528763 kubelet[2825]: W0114 05:45:04.528649 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.528763 kubelet[2825]: E0114 05:45:04.528659 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.529004 containerd[1620]: time="2026-01-14T05:45:04.528390154Z" level=info msg="CreateContainer within sandbox \"91893bc284ee48de678f9cab15a7823ca9a74dc527b4425406f5cd97781170cc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"04e0d9744010bb7eafe5d5996b3bb9b4d5d361469804dee4108814fcfb093a29\"" Jan 14 05:45:04.531717 kubelet[2825]: E0114 05:45:04.530439 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.531717 kubelet[2825]: W0114 05:45:04.530531 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.531717 kubelet[2825]: E0114 05:45:04.530562 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.531962 containerd[1620]: time="2026-01-14T05:45:04.531083994Z" level=info msg="StartContainer for \"04e0d9744010bb7eafe5d5996b3bb9b4d5d361469804dee4108814fcfb093a29\"" Jan 14 05:45:04.533870 containerd[1620]: time="2026-01-14T05:45:04.533669316Z" level=info msg="connecting to shim 04e0d9744010bb7eafe5d5996b3bb9b4d5d361469804dee4108814fcfb093a29" address="unix:///run/containerd/s/09c0563195b1d67be1a967f7153904192fa272382dddeae1267081b8ffb3d47a" protocol=ttrpc version=3 Jan 14 05:45:04.535397 kubelet[2825]: E0114 05:45:04.533991 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.535397 kubelet[2825]: W0114 05:45:04.534076 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.535397 kubelet[2825]: E0114 05:45:04.534100 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.536380 kubelet[2825]: E0114 05:45:04.535811 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.536380 kubelet[2825]: W0114 05:45:04.535962 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.536380 kubelet[2825]: E0114 05:45:04.535977 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.536839 kubelet[2825]: E0114 05:45:04.536699 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.536839 kubelet[2825]: W0114 05:45:04.536786 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.536839 kubelet[2825]: E0114 05:45:04.536800 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.538564 kubelet[2825]: E0114 05:45:04.538422 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.538564 kubelet[2825]: W0114 05:45:04.538438 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.538564 kubelet[2825]: E0114 05:45:04.538452 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.540610 kubelet[2825]: E0114 05:45:04.540326 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.540610 kubelet[2825]: W0114 05:45:04.540392 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.540610 kubelet[2825]: E0114 05:45:04.540403 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.541690 kubelet[2825]: E0114 05:45:04.541385 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.541690 kubelet[2825]: W0114 05:45:04.541396 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.541690 kubelet[2825]: E0114 05:45:04.541406 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.542619 kubelet[2825]: E0114 05:45:04.542352 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.542619 kubelet[2825]: W0114 05:45:04.542372 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.542619 kubelet[2825]: E0114 05:45:04.542387 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.544298 kubelet[2825]: E0114 05:45:04.543087 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.544298 kubelet[2825]: W0114 05:45:04.543099 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.544298 kubelet[2825]: E0114 05:45:04.543110 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 05:45:04.544298 kubelet[2825]: E0114 05:45:04.544104 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.544298 kubelet[2825]: W0114 05:45:04.544117 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.544298 kubelet[2825]: E0114 05:45:04.544131 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.545354 kubelet[2825]: E0114 05:45:04.545116 2825 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 05:45:04.545570 kubelet[2825]: W0114 05:45:04.545476 2825 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 05:45:04.545570 kubelet[2825]: E0114 05:45:04.545561 2825 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 05:45:04.598716 systemd[1]: Started cri-containerd-04e0d9744010bb7eafe5d5996b3bb9b4d5d361469804dee4108814fcfb093a29.scope - libcontainer container 04e0d9744010bb7eafe5d5996b3bb9b4d5d361469804dee4108814fcfb093a29. 
Jan 14 05:45:04.727000 audit: BPF prog-id=164 op=LOAD Jan 14 05:45:04.727000 audit[3515]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3359 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:04.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653064393734343031306262376561666535643539393662336262 Jan 14 05:45:04.728000 audit: BPF prog-id=165 op=LOAD Jan 14 05:45:04.728000 audit[3515]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3359 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:04.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653064393734343031306262376561666535643539393662336262 Jan 14 05:45:04.728000 audit: BPF prog-id=165 op=UNLOAD Jan 14 05:45:04.728000 audit[3515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3359 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:04.728000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653064393734343031306262376561666535643539393662336262 Jan 14 05:45:04.728000 audit: BPF prog-id=164 op=UNLOAD Jan 14 05:45:04.728000 audit[3515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3359 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:04.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653064393734343031306262376561666535643539393662336262 Jan 14 05:45:04.728000 audit: BPF prog-id=166 op=LOAD Jan 14 05:45:04.728000 audit[3515]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3359 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:04.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034653064393734343031306262376561666535643539393662336262 Jan 14 05:45:04.806977 containerd[1620]: time="2026-01-14T05:45:04.806757783Z" level=info msg="StartContainer for \"04e0d9744010bb7eafe5d5996b3bb9b4d5d361469804dee4108814fcfb093a29\" returns successfully" Jan 14 05:45:04.839496 systemd[1]: cri-containerd-04e0d9744010bb7eafe5d5996b3bb9b4d5d361469804dee4108814fcfb093a29.scope: Deactivated successfully. 
Jan 14 05:45:04.842308 systemd[1]: cri-containerd-04e0d9744010bb7eafe5d5996b3bb9b4d5d361469804dee4108814fcfb093a29.scope: Consumed 136ms CPU time, 6.2M memory peak, 4.6M written to disk. Jan 14 05:45:04.842000 audit: BPF prog-id=166 op=UNLOAD Jan 14 05:45:04.845357 containerd[1620]: time="2026-01-14T05:45:04.845284053Z" level=info msg="received container exit event container_id:\"04e0d9744010bb7eafe5d5996b3bb9b4d5d361469804dee4108814fcfb093a29\" id:\"04e0d9744010bb7eafe5d5996b3bb9b4d5d361469804dee4108814fcfb093a29\" pid:3533 exited_at:{seconds:1768369504 nanos:843648545}" Jan 14 05:45:04.920584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04e0d9744010bb7eafe5d5996b3bb9b4d5d361469804dee4108814fcfb093a29-rootfs.mount: Deactivated successfully. Jan 14 05:45:05.326784 kubelet[2825]: I0114 05:45:05.326555 2825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 05:45:05.328053 kubelet[2825]: E0114 05:45:05.327639 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:05.328053 kubelet[2825]: E0114 05:45:05.327711 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:05.330070 containerd[1620]: time="2026-01-14T05:45:05.329974484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 05:45:05.357301 kubelet[2825]: I0114 05:45:05.356540 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54ddc7c756-bnrv5" podStartSLOduration=3.380526238 podStartE2EDuration="6.356526868s" podCreationTimestamp="2026-01-14 05:44:59 +0000 UTC" firstStartedPulling="2026-01-14 05:45:00.57398577 +0000 UTC m=+22.758465291" lastFinishedPulling="2026-01-14 05:45:03.5499864 +0000 UTC m=+25.734465921" 
observedRunningTime="2026-01-14 05:45:04.367412744 +0000 UTC m=+26.551892265" watchObservedRunningTime="2026-01-14 05:45:05.356526868 +0000 UTC m=+27.541006389" Jan 14 05:45:06.101555 kubelet[2825]: E0114 05:45:06.101255 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:45:08.102202 kubelet[2825]: E0114 05:45:08.102104 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:45:08.679631 containerd[1620]: time="2026-01-14T05:45:08.679459433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:45:08.681830 containerd[1620]: time="2026-01-14T05:45:08.681741952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 05:45:08.684612 containerd[1620]: time="2026-01-14T05:45:08.684513074Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:45:08.689692 containerd[1620]: time="2026-01-14T05:45:08.689545678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:45:08.690067 containerd[1620]: time="2026-01-14T05:45:08.689858443Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.359845166s" Jan 14 05:45:08.690067 containerd[1620]: time="2026-01-14T05:45:08.690023260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 05:45:08.698314 containerd[1620]: time="2026-01-14T05:45:08.698049203Z" level=info msg="CreateContainer within sandbox \"91893bc284ee48de678f9cab15a7823ca9a74dc527b4425406f5cd97781170cc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 05:45:08.714825 containerd[1620]: time="2026-01-14T05:45:08.714596285Z" level=info msg="Container d7431a44c2e2e01561e75e61e90c65312c3b0eb58f40da223d9fd8f8f693c18e: CDI devices from CRI Config.CDIDevices: []" Jan 14 05:45:08.733756 containerd[1620]: time="2026-01-14T05:45:08.733621958Z" level=info msg="CreateContainer within sandbox \"91893bc284ee48de678f9cab15a7823ca9a74dc527b4425406f5cd97781170cc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d7431a44c2e2e01561e75e61e90c65312c3b0eb58f40da223d9fd8f8f693c18e\"" Jan 14 05:45:08.735451 containerd[1620]: time="2026-01-14T05:45:08.735318674Z" level=info msg="StartContainer for \"d7431a44c2e2e01561e75e61e90c65312c3b0eb58f40da223d9fd8f8f693c18e\"" Jan 14 05:45:08.737589 containerd[1620]: time="2026-01-14T05:45:08.737533214Z" level=info msg="connecting to shim d7431a44c2e2e01561e75e61e90c65312c3b0eb58f40da223d9fd8f8f693c18e" address="unix:///run/containerd/s/09c0563195b1d67be1a967f7153904192fa272382dddeae1267081b8ffb3d47a" protocol=ttrpc version=3 Jan 14 05:45:08.786977 systemd[1]: Started 
cri-containerd-d7431a44c2e2e01561e75e61e90c65312c3b0eb58f40da223d9fd8f8f693c18e.scope - libcontainer container d7431a44c2e2e01561e75e61e90c65312c3b0eb58f40da223d9fd8f8f693c18e. Jan 14 05:45:08.892000 audit: BPF prog-id=167 op=LOAD Jan 14 05:45:08.900254 kernel: kauditd_printk_skb: 78 callbacks suppressed Jan 14 05:45:08.900497 kernel: audit: type=1334 audit(1768369508.892:553): prog-id=167 op=LOAD Jan 14 05:45:08.892000 audit[3579]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3359 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:08.929083 kernel: audit: type=1300 audit(1768369508.892:553): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3359 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:08.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437343331613434633265326530313536316537356536316539306336 Jan 14 05:45:08.951565 kernel: audit: type=1327 audit(1768369508.892:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437343331613434633265326530313536316537356536316539306336 Jan 14 05:45:08.951693 kernel: audit: type=1334 audit(1768369508.893:554): prog-id=168 op=LOAD Jan 14 05:45:08.893000 audit: BPF prog-id=168 op=LOAD Jan 14 05:45:08.958527 kernel: audit: type=1300 audit(1768369508.893:554): arch=c000003e syscall=321 success=yes exit=22 
a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3359 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:08.893000 audit[3579]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3359 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:08.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437343331613434633265326530313536316537356536316539306336 Jan 14 05:45:09.002739 kernel: audit: type=1327 audit(1768369508.893:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437343331613434633265326530313536316537356536316539306336 Jan 14 05:45:09.002816 kernel: audit: type=1334 audit(1768369508.893:555): prog-id=168 op=UNLOAD Jan 14 05:45:08.893000 audit: BPF prog-id=168 op=UNLOAD Jan 14 05:45:08.893000 audit[3579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3359 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:09.015023 containerd[1620]: time="2026-01-14T05:45:09.014812673Z" level=info msg="StartContainer for \"d7431a44c2e2e01561e75e61e90c65312c3b0eb58f40da223d9fd8f8f693c18e\" returns successfully" Jan 14 05:45:09.027383 kernel: audit: type=1300 audit(1768369508.893:555): arch=c000003e syscall=3 success=yes exit=0 a0=16 
a1=0 a2=0 a3=0 items=0 ppid=3359 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:09.027458 kernel: audit: type=1327 audit(1768369508.893:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437343331613434633265326530313536316537356536316539306336 Jan 14 05:45:08.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437343331613434633265326530313536316537356536316539306336 Jan 14 05:45:08.893000 audit: BPF prog-id=167 op=UNLOAD Jan 14 05:45:09.050616 kernel: audit: type=1334 audit(1768369508.893:556): prog-id=167 op=UNLOAD Jan 14 05:45:08.893000 audit[3579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3359 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:08.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437343331613434633265326530313536316537356536316539306336 Jan 14 05:45:08.893000 audit: BPF prog-id=169 op=LOAD Jan 14 05:45:08.893000 audit[3579]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3359 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:08.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437343331613434633265326530313536316537356536316539306336 Jan 14 05:45:09.353586 kubelet[2825]: E0114 05:45:09.352875 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:10.093663 systemd[1]: cri-containerd-d7431a44c2e2e01561e75e61e90c65312c3b0eb58f40da223d9fd8f8f693c18e.scope: Deactivated successfully. Jan 14 05:45:10.094496 systemd[1]: cri-containerd-d7431a44c2e2e01561e75e61e90c65312c3b0eb58f40da223d9fd8f8f693c18e.scope: Consumed 1.236s CPU time, 178.4M memory peak, 3.4M read from disk, 171.3M written to disk. Jan 14 05:45:10.101132 containerd[1620]: time="2026-01-14T05:45:10.101032125Z" level=info msg="received container exit event container_id:\"d7431a44c2e2e01561e75e61e90c65312c3b0eb58f40da223d9fd8f8f693c18e\" id:\"d7431a44c2e2e01561e75e61e90c65312c3b0eb58f40da223d9fd8f8f693c18e\" pid:3592 exited_at:{seconds:1768369510 nanos:99883872}" Jan 14 05:45:10.103000 audit: BPF prog-id=169 op=UNLOAD Jan 14 05:45:10.106135 kubelet[2825]: E0114 05:45:10.106101 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:45:10.151445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7431a44c2e2e01561e75e61e90c65312c3b0eb58f40da223d9fd8f8f693c18e-rootfs.mount: Deactivated successfully. 
Jan 14 05:45:10.158267 kubelet[2825]: I0114 05:45:10.157592 2825 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 14 05:45:10.277665 systemd[1]: Created slice kubepods-burstable-pode9fea0eb_bdf0_4dc9_b3cb_7a90544d4156.slice - libcontainer container kubepods-burstable-pode9fea0eb_bdf0_4dc9_b3cb_7a90544d4156.slice. Jan 14 05:45:10.302137 systemd[1]: Created slice kubepods-besteffort-pod1cbfb118_b594_42d6_be3d_0e1840e8dae4.slice - libcontainer container kubepods-besteffort-pod1cbfb118_b594_42d6_be3d_0e1840e8dae4.slice. Jan 14 05:45:10.318548 systemd[1]: Created slice kubepods-besteffort-podf8e3b291_7413_4398_b3ac_57e03796db9f.slice - libcontainer container kubepods-besteffort-podf8e3b291_7413_4398_b3ac_57e03796db9f.slice. Jan 14 05:45:10.340644 systemd[1]: Created slice kubepods-burstable-podcefaac90_18ac_4910_8420_3803dde0c763.slice - libcontainer container kubepods-burstable-podcefaac90_18ac_4910_8420_3803dde0c763.slice. Jan 14 05:45:10.354483 systemd[1]: Created slice kubepods-besteffort-pod64a69192_713c_418d_907c_75ea3917f0cd.slice - libcontainer container kubepods-besteffort-pod64a69192_713c_418d_907c_75ea3917f0cd.slice. Jan 14 05:45:10.375480 kubelet[2825]: E0114 05:45:10.374127 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:10.375067 systemd[1]: Created slice kubepods-besteffort-pod8b419574_ee17_4c47_bc9c_99544ac25d88.slice - libcontainer container kubepods-besteffort-pod8b419574_ee17_4c47_bc9c_99544ac25d88.slice. Jan 14 05:45:10.384448 containerd[1620]: time="2026-01-14T05:45:10.384132457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 05:45:10.393629 systemd[1]: Created slice kubepods-besteffort-pode1f153ba_430a_43e5_84a9_e29936603f76.slice - libcontainer container kubepods-besteffort-pode1f153ba_430a_43e5_84a9_e29936603f76.slice. 
Jan 14 05:45:10.396401 kubelet[2825]: I0114 05:45:10.394568 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cefaac90-18ac-4910-8420-3803dde0c763-config-volume\") pod \"coredns-66bc5c9577-w9fc7\" (UID: \"cefaac90-18ac-4910-8420-3803dde0c763\") " pod="kube-system/coredns-66bc5c9577-w9fc7" Jan 14 05:45:10.396401 kubelet[2825]: I0114 05:45:10.394596 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cbfb118-b594-42d6-be3d-0e1840e8dae4-config\") pod \"goldmane-7c778bb748-h4bdc\" (UID: \"1cbfb118-b594-42d6-be3d-0e1840e8dae4\") " pod="calico-system/goldmane-7c778bb748-h4bdc" Jan 14 05:45:10.396401 kubelet[2825]: I0114 05:45:10.394610 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1cbfb118-b594-42d6-be3d-0e1840e8dae4-goldmane-key-pair\") pod \"goldmane-7c778bb748-h4bdc\" (UID: \"1cbfb118-b594-42d6-be3d-0e1840e8dae4\") " pod="calico-system/goldmane-7c778bb748-h4bdc" Jan 14 05:45:10.396401 kubelet[2825]: I0114 05:45:10.394625 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5lmq\" (UniqueName: \"kubernetes.io/projected/f8e3b291-7413-4398-b3ac-57e03796db9f-kube-api-access-b5lmq\") pod \"calico-apiserver-c8b67549f-nrl4m\" (UID: \"f8e3b291-7413-4398-b3ac-57e03796db9f\") " pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" Jan 14 05:45:10.396401 kubelet[2825]: I0114 05:45:10.394641 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b419574-ee17-4c47-bc9c-99544ac25d88-whisker-ca-bundle\") pod \"whisker-55766fbd54-svfq6\" (UID: \"8b419574-ee17-4c47-bc9c-99544ac25d88\") " 
pod="calico-system/whisker-55766fbd54-svfq6" Jan 14 05:45:10.396538 kubelet[2825]: I0114 05:45:10.394653 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9q9h\" (UniqueName: \"kubernetes.io/projected/cefaac90-18ac-4910-8420-3803dde0c763-kube-api-access-q9q9h\") pod \"coredns-66bc5c9577-w9fc7\" (UID: \"cefaac90-18ac-4910-8420-3803dde0c763\") " pod="kube-system/coredns-66bc5c9577-w9fc7" Jan 14 05:45:10.396538 kubelet[2825]: I0114 05:45:10.394666 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e1f153ba-430a-43e5-84a9-e29936603f76-calico-apiserver-certs\") pod \"calico-apiserver-7d668c555c-qwjx8\" (UID: \"e1f153ba-430a-43e5-84a9-e29936603f76\") " pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" Jan 14 05:45:10.396538 kubelet[2825]: I0114 05:45:10.394680 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cbfb118-b594-42d6-be3d-0e1840e8dae4-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-h4bdc\" (UID: \"1cbfb118-b594-42d6-be3d-0e1840e8dae4\") " pod="calico-system/goldmane-7c778bb748-h4bdc" Jan 14 05:45:10.396538 kubelet[2825]: I0114 05:45:10.394695 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjsvr\" (UniqueName: \"kubernetes.io/projected/e1f153ba-430a-43e5-84a9-e29936603f76-kube-api-access-jjsvr\") pod \"calico-apiserver-7d668c555c-qwjx8\" (UID: \"e1f153ba-430a-43e5-84a9-e29936603f76\") " pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" Jan 14 05:45:10.396538 kubelet[2825]: I0114 05:45:10.394709 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv5z8\" (UniqueName: 
\"kubernetes.io/projected/8b419574-ee17-4c47-bc9c-99544ac25d88-kube-api-access-cv5z8\") pod \"whisker-55766fbd54-svfq6\" (UID: \"8b419574-ee17-4c47-bc9c-99544ac25d88\") " pod="calico-system/whisker-55766fbd54-svfq6" Jan 14 05:45:10.396684 kubelet[2825]: I0114 05:45:10.394723 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156-config-volume\") pod \"coredns-66bc5c9577-5h66t\" (UID: \"e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156\") " pod="kube-system/coredns-66bc5c9577-5h66t" Jan 14 05:45:10.396684 kubelet[2825]: I0114 05:45:10.394738 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b419574-ee17-4c47-bc9c-99544ac25d88-whisker-backend-key-pair\") pod \"whisker-55766fbd54-svfq6\" (UID: \"8b419574-ee17-4c47-bc9c-99544ac25d88\") " pod="calico-system/whisker-55766fbd54-svfq6" Jan 14 05:45:10.396684 kubelet[2825]: I0114 05:45:10.394753 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64a69192-713c-418d-907c-75ea3917f0cd-tigera-ca-bundle\") pod \"calico-kube-controllers-85449f874f-xn2d4\" (UID: \"64a69192-713c-418d-907c-75ea3917f0cd\") " pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" Jan 14 05:45:10.396684 kubelet[2825]: I0114 05:45:10.394926 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mn45\" (UniqueName: \"kubernetes.io/projected/64a69192-713c-418d-907c-75ea3917f0cd-kube-api-access-4mn45\") pod \"calico-kube-controllers-85449f874f-xn2d4\" (UID: \"64a69192-713c-418d-907c-75ea3917f0cd\") " pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" Jan 14 05:45:10.396684 kubelet[2825]: I0114 05:45:10.395099 2825 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmzc2\" (UniqueName: \"kubernetes.io/projected/e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156-kube-api-access-dmzc2\") pod \"coredns-66bc5c9577-5h66t\" (UID: \"e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156\") " pod="kube-system/coredns-66bc5c9577-5h66t" Jan 14 05:45:10.397019 kubelet[2825]: I0114 05:45:10.395294 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqrw5\" (UniqueName: \"kubernetes.io/projected/1cbfb118-b594-42d6-be3d-0e1840e8dae4-kube-api-access-pqrw5\") pod \"goldmane-7c778bb748-h4bdc\" (UID: \"1cbfb118-b594-42d6-be3d-0e1840e8dae4\") " pod="calico-system/goldmane-7c778bb748-h4bdc" Jan 14 05:45:10.397019 kubelet[2825]: I0114 05:45:10.395330 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f8e3b291-7413-4398-b3ac-57e03796db9f-calico-apiserver-certs\") pod \"calico-apiserver-c8b67549f-nrl4m\" (UID: \"f8e3b291-7413-4398-b3ac-57e03796db9f\") " pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" Jan 14 05:45:10.424118 systemd[1]: Created slice kubepods-besteffort-pod5832da08_4ce6_484b_b421_5f73ad1ce8d2.slice - libcontainer container kubepods-besteffort-pod5832da08_4ce6_484b_b421_5f73ad1ce8d2.slice. 
Jan 14 05:45:10.497145 kubelet[2825]: I0114 05:45:10.496630 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkmpg\" (UniqueName: \"kubernetes.io/projected/5832da08-4ce6-484b-b421-5f73ad1ce8d2-kube-api-access-rkmpg\") pod \"calico-apiserver-c8b67549f-bpw89\" (UID: \"5832da08-4ce6-484b-b421-5f73ad1ce8d2\") " pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" Jan 14 05:45:10.497145 kubelet[2825]: I0114 05:45:10.496826 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5832da08-4ce6-484b-b421-5f73ad1ce8d2-calico-apiserver-certs\") pod \"calico-apiserver-c8b67549f-bpw89\" (UID: \"5832da08-4ce6-484b-b421-5f73ad1ce8d2\") " pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" Jan 14 05:45:10.601132 kubelet[2825]: E0114 05:45:10.600585 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:10.603447 containerd[1620]: time="2026-01-14T05:45:10.603408803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5h66t,Uid:e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156,Namespace:kube-system,Attempt:0,}" Jan 14 05:45:10.638677 containerd[1620]: time="2026-01-14T05:45:10.638278396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-nrl4m,Uid:f8e3b291-7413-4398-b3ac-57e03796db9f,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:45:10.644813 containerd[1620]: time="2026-01-14T05:45:10.644576115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h4bdc,Uid:1cbfb118-b594-42d6-be3d-0e1840e8dae4,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:10.651223 kubelet[2825]: E0114 05:45:10.651063 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:10.652857 containerd[1620]: time="2026-01-14T05:45:10.652823825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w9fc7,Uid:cefaac90-18ac-4910-8420-3803dde0c763,Namespace:kube-system,Attempt:0,}" Jan 14 05:45:10.669915 containerd[1620]: time="2026-01-14T05:45:10.669692906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85449f874f-xn2d4,Uid:64a69192-713c-418d-907c-75ea3917f0cd,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:10.693692 containerd[1620]: time="2026-01-14T05:45:10.693349690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55766fbd54-svfq6,Uid:8b419574-ee17-4c47-bc9c-99544ac25d88,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:10.734847 containerd[1620]: time="2026-01-14T05:45:10.734542245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d668c555c-qwjx8,Uid:e1f153ba-430a-43e5-84a9-e29936603f76,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:45:10.748907 containerd[1620]: time="2026-01-14T05:45:10.748749709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-bpw89,Uid:5832da08-4ce6-484b-b421-5f73ad1ce8d2,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:45:11.099562 containerd[1620]: time="2026-01-14T05:45:11.099510173Z" level=error msg="Failed to destroy network for sandbox \"b60e6327cf337f0b3159e5db3cb79d40bbd32d5be00ccb5b58f772abeafeb5a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.108615 containerd[1620]: time="2026-01-14T05:45:11.107872002Z" level=error msg="Failed to destroy network for sandbox \"628743e189e7ff502e3ac1251b17a9aa90ce25d1bb8436d9177f234e69f6c524\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.145800 containerd[1620]: time="2026-01-14T05:45:11.145728824Z" level=error msg="Failed to destroy network for sandbox \"903bd1f25ed7645fcce2bb9c04ff0fd4c7da6e4c0b63c18a4488bf8cae3e73b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.222347 containerd[1620]: time="2026-01-14T05:45:11.221056656Z" level=error msg="Failed to destroy network for sandbox \"999521848ae393e3539a7de37ee9d653dc5ef6ea5818a87215c8aae04ea864fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.233155 systemd[1]: run-netns-cni\x2d9ab139a7\x2d43c9\x2d0f89\x2d05bf\x2dd94968dd821a.mount: Deactivated successfully. Jan 14 05:45:11.238725 containerd[1620]: time="2026-01-14T05:45:11.238601316Z" level=error msg="Failed to destroy network for sandbox \"27c0410d758318d913b7387ad1ca9a19a54fd59626bf58132295c2f010bfb9f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.257542 systemd[1]: run-netns-cni\x2d9a222479\x2dfe03\x2dea38\x2dcdc4\x2d527aef72ad87.mount: Deactivated successfully. 
Jan 14 05:45:11.275071 containerd[1620]: time="2026-01-14T05:45:11.247895547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w9fc7,Uid:cefaac90-18ac-4910-8420-3803dde0c763,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60e6327cf337f0b3159e5db3cb79d40bbd32d5be00ccb5b58f772abeafeb5a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.276437 containerd[1620]: time="2026-01-14T05:45:11.276383787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-nrl4m,Uid:f8e3b291-7413-4398-b3ac-57e03796db9f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"628743e189e7ff502e3ac1251b17a9aa90ce25d1bb8436d9177f234e69f6c524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.277347 kubelet[2825]: E0114 05:45:11.277303 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60e6327cf337f0b3159e5db3cb79d40bbd32d5be00ccb5b58f772abeafeb5a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.277727 kubelet[2825]: E0114 05:45:11.277705 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60e6327cf337f0b3159e5db3cb79d40bbd32d5be00ccb5b58f772abeafeb5a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w9fc7" Jan 14 05:45:11.277884 kubelet[2825]: E0114 05:45:11.277867 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b60e6327cf337f0b3159e5db3cb79d40bbd32d5be00ccb5b58f772abeafeb5a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w9fc7" Jan 14 05:45:11.278127 kubelet[2825]: E0114 05:45:11.278091 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-w9fc7_kube-system(cefaac90-18ac-4910-8420-3803dde0c763)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-w9fc7_kube-system(cefaac90-18ac-4910-8420-3803dde0c763)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b60e6327cf337f0b3159e5db3cb79d40bbd32d5be00ccb5b58f772abeafeb5a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-w9fc7" podUID="cefaac90-18ac-4910-8420-3803dde0c763" Jan 14 05:45:11.280515 containerd[1620]: time="2026-01-14T05:45:11.276549345Z" level=error msg="Failed to destroy network for sandbox \"36e6c96e6982c10eb5351dc6b833fd0684609d4aadebe9c2d691220038da76cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.288877 systemd[1]: run-netns-cni\x2db02ab76b\x2de29c\x2dcee5\x2d04a7\x2dae04a750fd24.mount: Deactivated successfully. 
Jan 14 05:45:11.329098 containerd[1620]: time="2026-01-14T05:45:11.328891234Z" level=error msg="Failed to destroy network for sandbox \"149a676eb4a09e0ec19d869c412f81c1ef55d250c8be31bdc5fc3ec4a35b8fbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.363642 kubelet[2825]: E0114 05:45:11.362515 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"628743e189e7ff502e3ac1251b17a9aa90ce25d1bb8436d9177f234e69f6c524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.363642 kubelet[2825]: E0114 05:45:11.362593 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"628743e189e7ff502e3ac1251b17a9aa90ce25d1bb8436d9177f234e69f6c524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" Jan 14 05:45:11.363642 kubelet[2825]: E0114 05:45:11.362623 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"628743e189e7ff502e3ac1251b17a9aa90ce25d1bb8436d9177f234e69f6c524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" Jan 14 05:45:11.363843 containerd[1620]: time="2026-01-14T05:45:11.363290283Z" level=error msg="Failed to destroy network for sandbox 
\"fd2889db7cf525bc460eeb5eb0d5ef31206745fc2aaf0b093a08e5424fc6e42a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.363915 kubelet[2825]: E0114 05:45:11.362690 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c8b67549f-nrl4m_calico-apiserver(f8e3b291-7413-4398-b3ac-57e03796db9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c8b67549f-nrl4m_calico-apiserver(f8e3b291-7413-4398-b3ac-57e03796db9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"628743e189e7ff502e3ac1251b17a9aa90ce25d1bb8436d9177f234e69f6c524\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:45:11.368578 systemd[1]: run-netns-cni\x2dbeccfdf0\x2d428d\x2dead2\x2d5755\x2df892bcaf2ffa.mount: Deactivated successfully. 
Jan 14 05:45:11.408750 containerd[1620]: time="2026-01-14T05:45:11.408595393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-bpw89,Uid:5832da08-4ce6-484b-b421-5f73ad1ce8d2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd2889db7cf525bc460eeb5eb0d5ef31206745fc2aaf0b093a08e5424fc6e42a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.409145 kubelet[2825]: E0114 05:45:11.409049 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd2889db7cf525bc460eeb5eb0d5ef31206745fc2aaf0b093a08e5424fc6e42a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.409145 kubelet[2825]: E0114 05:45:11.409119 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd2889db7cf525bc460eeb5eb0d5ef31206745fc2aaf0b093a08e5424fc6e42a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" Jan 14 05:45:11.410646 kubelet[2825]: E0114 05:45:11.409147 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd2889db7cf525bc460eeb5eb0d5ef31206745fc2aaf0b093a08e5424fc6e42a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" Jan 14 05:45:11.413731 containerd[1620]: time="2026-01-14T05:45:11.413625328Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h4bdc,Uid:1cbfb118-b594-42d6-be3d-0e1840e8dae4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"903bd1f25ed7645fcce2bb9c04ff0fd4c7da6e4c0b63c18a4488bf8cae3e73b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.414105 kubelet[2825]: E0114 05:45:11.413823 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c8b67549f-bpw89_calico-apiserver(5832da08-4ce6-484b-b421-5f73ad1ce8d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c8b67549f-bpw89_calico-apiserver(5832da08-4ce6-484b-b421-5f73ad1ce8d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd2889db7cf525bc460eeb5eb0d5ef31206745fc2aaf0b093a08e5424fc6e42a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:45:11.415666 kubelet[2825]: E0114 05:45:11.415526 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903bd1f25ed7645fcce2bb9c04ff0fd4c7da6e4c0b63c18a4488bf8cae3e73b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.415666 kubelet[2825]: E0114 05:45:11.415634 2825 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903bd1f25ed7645fcce2bb9c04ff0fd4c7da6e4c0b63c18a4488bf8cae3e73b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-h4bdc" Jan 14 05:45:11.415666 kubelet[2825]: E0114 05:45:11.415657 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903bd1f25ed7645fcce2bb9c04ff0fd4c7da6e4c0b63c18a4488bf8cae3e73b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-h4bdc" Jan 14 05:45:11.415802 kubelet[2825]: E0114 05:45:11.415702 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-h4bdc_calico-system(1cbfb118-b594-42d6-be3d-0e1840e8dae4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-h4bdc_calico-system(1cbfb118-b594-42d6-be3d-0e1840e8dae4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"903bd1f25ed7645fcce2bb9c04ff0fd4c7da6e4c0b63c18a4488bf8cae3e73b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:45:11.422214 containerd[1620]: time="2026-01-14T05:45:11.421775984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5h66t,Uid:e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156,Namespace:kube-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"999521848ae393e3539a7de37ee9d653dc5ef6ea5818a87215c8aae04ea864fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.422791 kubelet[2825]: E0114 05:45:11.422745 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"999521848ae393e3539a7de37ee9d653dc5ef6ea5818a87215c8aae04ea864fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.422845 kubelet[2825]: E0114 05:45:11.422806 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"999521848ae393e3539a7de37ee9d653dc5ef6ea5818a87215c8aae04ea864fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5h66t" Jan 14 05:45:11.422845 kubelet[2825]: E0114 05:45:11.422830 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"999521848ae393e3539a7de37ee9d653dc5ef6ea5818a87215c8aae04ea864fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5h66t" Jan 14 05:45:11.422915 kubelet[2825]: E0114 05:45:11.422881 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5h66t_kube-system(e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5h66t_kube-system(e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"999521848ae393e3539a7de37ee9d653dc5ef6ea5818a87215c8aae04ea864fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5h66t" podUID="e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156" Jan 14 05:45:11.427878 containerd[1620]: time="2026-01-14T05:45:11.427384934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55766fbd54-svfq6,Uid:8b419574-ee17-4c47-bc9c-99544ac25d88,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"36e6c96e6982c10eb5351dc6b833fd0684609d4aadebe9c2d691220038da76cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.429072 containerd[1620]: time="2026-01-14T05:45:11.428892066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85449f874f-xn2d4,Uid:64a69192-713c-418d-907c-75ea3917f0cd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c0410d758318d913b7387ad1ca9a19a54fd59626bf58132295c2f010bfb9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.429914 kubelet[2825]: E0114 05:45:11.429744 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c0410d758318d913b7387ad1ca9a19a54fd59626bf58132295c2f010bfb9f0\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.430907 kubelet[2825]: E0114 05:45:11.430629 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c0410d758318d913b7387ad1ca9a19a54fd59626bf58132295c2f010bfb9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" Jan 14 05:45:11.430907 kubelet[2825]: E0114 05:45:11.430718 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c0410d758318d913b7387ad1ca9a19a54fd59626bf58132295c2f010bfb9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" Jan 14 05:45:11.430907 kubelet[2825]: E0114 05:45:11.430777 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85449f874f-xn2d4_calico-system(64a69192-713c-418d-907c-75ea3917f0cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85449f874f-xn2d4_calico-system(64a69192-713c-418d-907c-75ea3917f0cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27c0410d758318d913b7387ad1ca9a19a54fd59626bf58132295c2f010bfb9f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" 
podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:45:11.431336 kubelet[2825]: E0114 05:45:11.430148 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36e6c96e6982c10eb5351dc6b833fd0684609d4aadebe9c2d691220038da76cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.431336 kubelet[2825]: E0114 05:45:11.430842 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36e6c96e6982c10eb5351dc6b833fd0684609d4aadebe9c2d691220038da76cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55766fbd54-svfq6" Jan 14 05:45:11.431336 kubelet[2825]: E0114 05:45:11.430867 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36e6c96e6982c10eb5351dc6b833fd0684609d4aadebe9c2d691220038da76cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55766fbd54-svfq6" Jan 14 05:45:11.431408 kubelet[2825]: E0114 05:45:11.430910 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55766fbd54-svfq6_calico-system(8b419574-ee17-4c47-bc9c-99544ac25d88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55766fbd54-svfq6_calico-system(8b419574-ee17-4c47-bc9c-99544ac25d88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"36e6c96e6982c10eb5351dc6b833fd0684609d4aadebe9c2d691220038da76cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55766fbd54-svfq6" podUID="8b419574-ee17-4c47-bc9c-99544ac25d88" Jan 14 05:45:11.435684 containerd[1620]: time="2026-01-14T05:45:11.435078944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d668c555c-qwjx8,Uid:e1f153ba-430a-43e5-84a9-e29936603f76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"149a676eb4a09e0ec19d869c412f81c1ef55d250c8be31bdc5fc3ec4a35b8fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.437370 kubelet[2825]: E0114 05:45:11.436649 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"149a676eb4a09e0ec19d869c412f81c1ef55d250c8be31bdc5fc3ec4a35b8fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:11.437370 kubelet[2825]: E0114 05:45:11.436767 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"149a676eb4a09e0ec19d869c412f81c1ef55d250c8be31bdc5fc3ec4a35b8fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" Jan 14 05:45:11.437370 kubelet[2825]: E0114 05:45:11.436791 2825 kuberuntime_manager.go:1343] "CreatePodSandbox 
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"149a676eb4a09e0ec19d869c412f81c1ef55d250c8be31bdc5fc3ec4a35b8fbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" Jan 14 05:45:11.437930 kubelet[2825]: E0114 05:45:11.436838 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d668c555c-qwjx8_calico-apiserver(e1f153ba-430a-43e5-84a9-e29936603f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d668c555c-qwjx8_calico-apiserver(e1f153ba-430a-43e5-84a9-e29936603f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"149a676eb4a09e0ec19d869c412f81c1ef55d250c8be31bdc5fc3ec4a35b8fbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:45:12.112418 systemd[1]: Created slice kubepods-besteffort-pod2b560ec8_f090_4614_a1d5_13a4bc0ce8dc.slice - libcontainer container kubepods-besteffort-pod2b560ec8_f090_4614_a1d5_13a4bc0ce8dc.slice. Jan 14 05:45:12.125816 containerd[1620]: time="2026-01-14T05:45:12.125721979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46w2k,Uid:2b560ec8-f090-4614-a1d5-13a4bc0ce8dc,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:12.154873 systemd[1]: run-netns-cni\x2d2aa75783\x2db20d\x2d1ff0\x2d83be\x2d864bf674972c.mount: Deactivated successfully. 
Jan 14 05:45:12.329485 containerd[1620]: time="2026-01-14T05:45:12.329426186Z" level=error msg="Failed to destroy network for sandbox \"8fe2f8f498a3fb4aaca8610304ca2d9eefdb30fb94bc7ee86ea1211031f25bea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:12.335891 systemd[1]: run-netns-cni\x2dd723b6d3\x2d4634\x2d4c9f\x2df5d7\x2d8a1312196bd0.mount: Deactivated successfully. Jan 14 05:45:12.341314 containerd[1620]: time="2026-01-14T05:45:12.340612918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46w2k,Uid:2b560ec8-f090-4614-a1d5-13a4bc0ce8dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe2f8f498a3fb4aaca8610304ca2d9eefdb30fb94bc7ee86ea1211031f25bea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:12.342378 kubelet[2825]: E0114 05:45:12.341941 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe2f8f498a3fb4aaca8610304ca2d9eefdb30fb94bc7ee86ea1211031f25bea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:12.342378 kubelet[2825]: E0114 05:45:12.342293 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe2f8f498a3fb4aaca8610304ca2d9eefdb30fb94bc7ee86ea1211031f25bea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-46w2k" Jan 14 05:45:12.342378 kubelet[2825]: E0114 05:45:12.342325 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe2f8f498a3fb4aaca8610304ca2d9eefdb30fb94bc7ee86ea1211031f25bea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-46w2k" Jan 14 05:45:12.342920 kubelet[2825]: E0114 05:45:12.342393 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fe2f8f498a3fb4aaca8610304ca2d9eefdb30fb94bc7ee86ea1211031f25bea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:45:22.140848 containerd[1620]: time="2026-01-14T05:45:22.140473216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55766fbd54-svfq6,Uid:8b419574-ee17-4c47-bc9c-99544ac25d88,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:22.696926 containerd[1620]: time="2026-01-14T05:45:22.695770298Z" level=error msg="Failed to destroy network for sandbox \"58651b0e33be27269e12b92987e778f71255bb6cc703375c5488f37c543a234d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:22.708804 systemd[1]: 
run-netns-cni\x2de704dc6e\x2d8b09\x2d1865\x2db6d3\x2d25e944f5d390.mount: Deactivated successfully. Jan 14 05:45:22.718859 containerd[1620]: time="2026-01-14T05:45:22.713615468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55766fbd54-svfq6,Uid:8b419574-ee17-4c47-bc9c-99544ac25d88,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"58651b0e33be27269e12b92987e778f71255bb6cc703375c5488f37c543a234d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:22.719570 kubelet[2825]: E0114 05:45:22.716093 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58651b0e33be27269e12b92987e778f71255bb6cc703375c5488f37c543a234d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:22.719570 kubelet[2825]: E0114 05:45:22.716627 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58651b0e33be27269e12b92987e778f71255bb6cc703375c5488f37c543a234d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55766fbd54-svfq6" Jan 14 05:45:22.719570 kubelet[2825]: E0114 05:45:22.716655 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58651b0e33be27269e12b92987e778f71255bb6cc703375c5488f37c543a234d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55766fbd54-svfq6" Jan 14 05:45:22.720011 kubelet[2825]: E0114 05:45:22.716735 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55766fbd54-svfq6_calico-system(8b419574-ee17-4c47-bc9c-99544ac25d88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55766fbd54-svfq6_calico-system(8b419574-ee17-4c47-bc9c-99544ac25d88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58651b0e33be27269e12b92987e778f71255bb6cc703375c5488f37c543a234d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55766fbd54-svfq6" podUID="8b419574-ee17-4c47-bc9c-99544ac25d88" Jan 14 05:45:24.116989 containerd[1620]: time="2026-01-14T05:45:24.116927914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-nrl4m,Uid:f8e3b291-7413-4398-b3ac-57e03796db9f,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:45:24.129830 containerd[1620]: time="2026-01-14T05:45:24.129682873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d668c555c-qwjx8,Uid:e1f153ba-430a-43e5-84a9-e29936603f76,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:45:24.134543 containerd[1620]: time="2026-01-14T05:45:24.134107968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85449f874f-xn2d4,Uid:64a69192-713c-418d-907c-75ea3917f0cd,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:24.140985 kubelet[2825]: E0114 05:45:24.140533 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:24.142650 containerd[1620]: time="2026-01-14T05:45:24.142569809Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5h66t,Uid:e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156,Namespace:kube-system,Attempt:0,}" Jan 14 05:45:24.632583 containerd[1620]: time="2026-01-14T05:45:24.632096741Z" level=error msg="Failed to destroy network for sandbox \"48925e39940f9233635631311e9dff5ed3ac0f253d56b18554ca7a2ecfb08de2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:24.637534 containerd[1620]: time="2026-01-14T05:45:24.636467335Z" level=error msg="Failed to destroy network for sandbox \"e53223dc80ef4884f34f57bb23cfbcb14ccaf37840340561abd4e12fa7f8ad8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:24.639959 systemd[1]: run-netns-cni\x2d9c6001cc\x2d10f6\x2de76b\x2d4bcd\x2d869800d0f78b.mount: Deactivated successfully. 
Jan 14 05:45:24.657930 containerd[1620]: time="2026-01-14T05:45:24.657872145Z" level=error msg="Failed to destroy network for sandbox \"961ef325ad68c8ad31364b1a8d32a55764ba5fbddb009b7071581b481718a7ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:24.664899 containerd[1620]: time="2026-01-14T05:45:24.659583516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5h66t,Uid:e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"48925e39940f9233635631311e9dff5ed3ac0f253d56b18554ca7a2ecfb08de2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:24.665518 kubelet[2825]: E0114 05:45:24.663568 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48925e39940f9233635631311e9dff5ed3ac0f253d56b18554ca7a2ecfb08de2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:24.665518 kubelet[2825]: E0114 05:45:24.663648 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48925e39940f9233635631311e9dff5ed3ac0f253d56b18554ca7a2ecfb08de2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5h66t" Jan 14 05:45:24.665518 kubelet[2825]: E0114 05:45:24.663672 2825 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48925e39940f9233635631311e9dff5ed3ac0f253d56b18554ca7a2ecfb08de2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5h66t" Jan 14 05:45:24.665651 kubelet[2825]: E0114 05:45:24.663749 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5h66t_kube-system(e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5h66t_kube-system(e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48925e39940f9233635631311e9dff5ed3ac0f253d56b18554ca7a2ecfb08de2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5h66t" podUID="e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156" Jan 14 05:45:24.665977 containerd[1620]: time="2026-01-14T05:45:24.665896695Z" level=error msg="Failed to destroy network for sandbox \"1b7cd589e0325514aaefdee53917d975d430b1d0025b52fa6aa8d190c8355c2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:24.673108 containerd[1620]: time="2026-01-14T05:45:24.671849808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-nrl4m,Uid:f8e3b291-7413-4398-b3ac-57e03796db9f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e53223dc80ef4884f34f57bb23cfbcb14ccaf37840340561abd4e12fa7f8ad8d\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:24.673642 kubelet[2825]: E0114 05:45:24.672666 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e53223dc80ef4884f34f57bb23cfbcb14ccaf37840340561abd4e12fa7f8ad8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:24.673642 kubelet[2825]: E0114 05:45:24.672708 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e53223dc80ef4884f34f57bb23cfbcb14ccaf37840340561abd4e12fa7f8ad8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" Jan 14 05:45:24.673642 kubelet[2825]: E0114 05:45:24.672733 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e53223dc80ef4884f34f57bb23cfbcb14ccaf37840340561abd4e12fa7f8ad8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" Jan 14 05:45:24.673756 kubelet[2825]: E0114 05:45:24.672783 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c8b67549f-nrl4m_calico-apiserver(f8e3b291-7413-4398-b3ac-57e03796db9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-c8b67549f-nrl4m_calico-apiserver(f8e3b291-7413-4398-b3ac-57e03796db9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e53223dc80ef4884f34f57bb23cfbcb14ccaf37840340561abd4e12fa7f8ad8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:45:24.723702 containerd[1620]: time="2026-01-14T05:45:24.690566251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d668c555c-qwjx8,Uid:e1f153ba-430a-43e5-84a9-e29936603f76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"961ef325ad68c8ad31364b1a8d32a55764ba5fbddb009b7071581b481718a7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:24.723702 containerd[1620]: time="2026-01-14T05:45:24.706573130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85449f874f-xn2d4,Uid:64a69192-713c-418d-907c-75ea3917f0cd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7cd589e0325514aaefdee53917d975d430b1d0025b52fa6aa8d190c8355c2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:24.725106 kubelet[2825]: E0114 05:45:24.725053 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7cd589e0325514aaefdee53917d975d430b1d0025b52fa6aa8d190c8355c2d\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:24.725828 kubelet[2825]: E0114 05:45:24.725520 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"961ef325ad68c8ad31364b1a8d32a55764ba5fbddb009b7071581b481718a7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:24.725828 kubelet[2825]: E0114 05:45:24.725574 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7cd589e0325514aaefdee53917d975d430b1d0025b52fa6aa8d190c8355c2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" Jan 14 05:45:24.725828 kubelet[2825]: E0114 05:45:24.725601 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"961ef325ad68c8ad31364b1a8d32a55764ba5fbddb009b7071581b481718a7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" Jan 14 05:45:24.725828 kubelet[2825]: E0114 05:45:24.725635 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"961ef325ad68c8ad31364b1a8d32a55764ba5fbddb009b7071581b481718a7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" Jan 14 05:45:24.726019 kubelet[2825]: E0114 05:45:24.725707 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d668c555c-qwjx8_calico-apiserver(e1f153ba-430a-43e5-84a9-e29936603f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d668c555c-qwjx8_calico-apiserver(e1f153ba-430a-43e5-84a9-e29936603f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"961ef325ad68c8ad31364b1a8d32a55764ba5fbddb009b7071581b481718a7ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:45:24.726019 kubelet[2825]: E0114 05:45:24.725605 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7cd589e0325514aaefdee53917d975d430b1d0025b52fa6aa8d190c8355c2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" Jan 14 05:45:24.726568 kubelet[2825]: E0114 05:45:24.725779 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85449f874f-xn2d4_calico-system(64a69192-713c-418d-907c-75ea3917f0cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85449f874f-xn2d4_calico-system(64a69192-713c-418d-907c-75ea3917f0cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1b7cd589e0325514aaefdee53917d975d430b1d0025b52fa6aa8d190c8355c2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:45:25.134692 containerd[1620]: time="2026-01-14T05:45:25.132494295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-bpw89,Uid:5832da08-4ce6-484b-b421-5f73ad1ce8d2,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:45:25.154632 containerd[1620]: time="2026-01-14T05:45:25.153875363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46w2k,Uid:2b560ec8-f090-4614-a1d5-13a4bc0ce8dc,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:25.167151 systemd[1]: run-netns-cni\x2d432aa77d\x2defeb\x2df733\x2d871a\x2defcb9ce76a19.mount: Deactivated successfully. Jan 14 05:45:25.172695 systemd[1]: run-netns-cni\x2df064fe21\x2d58a1\x2da777\x2d6990\x2d11aea5aa20b0.mount: Deactivated successfully. Jan 14 05:45:25.172899 systemd[1]: run-netns-cni\x2d9bf57d70\x2db9e9\x2d3098\x2dde9a\x2d62aeeb3e3bdb.mount: Deactivated successfully. Jan 14 05:45:25.706674 containerd[1620]: time="2026-01-14T05:45:25.705957503Z" level=error msg="Failed to destroy network for sandbox \"9c27cb17fb988e462a6c6400dd9e8f9693bfeff776bfc528fc507a452577d5ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:25.714710 systemd[1]: run-netns-cni\x2d8961afc7\x2dd4a6\x2d410c\x2d29c9\x2d64b33f2c4339.mount: Deactivated successfully. 
Jan 14 05:45:25.741074 containerd[1620]: time="2026-01-14T05:45:25.739917987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-bpw89,Uid:5832da08-4ce6-484b-b421-5f73ad1ce8d2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c27cb17fb988e462a6c6400dd9e8f9693bfeff776bfc528fc507a452577d5ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:25.744597 kubelet[2825]: E0114 05:45:25.740529 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c27cb17fb988e462a6c6400dd9e8f9693bfeff776bfc528fc507a452577d5ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:25.744597 kubelet[2825]: E0114 05:45:25.740590 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c27cb17fb988e462a6c6400dd9e8f9693bfeff776bfc528fc507a452577d5ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" Jan 14 05:45:25.744597 kubelet[2825]: E0114 05:45:25.740613 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c27cb17fb988e462a6c6400dd9e8f9693bfeff776bfc528fc507a452577d5ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" Jan 14 05:45:25.745060 kubelet[2825]: E0114 05:45:25.740674 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c8b67549f-bpw89_calico-apiserver(5832da08-4ce6-484b-b421-5f73ad1ce8d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c8b67549f-bpw89_calico-apiserver(5832da08-4ce6-484b-b421-5f73ad1ce8d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c27cb17fb988e462a6c6400dd9e8f9693bfeff776bfc528fc507a452577d5ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:45:25.760579 containerd[1620]: time="2026-01-14T05:45:25.756972483Z" level=error msg="Failed to destroy network for sandbox \"072ba9be8361f1bc5add96cbfca5c296449f8def0d081b22fe289e688266f638\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:25.774144 systemd[1]: run-netns-cni\x2d54e95bcf\x2d8949\x2d80a9\x2d5193\x2d56a20ca4fb5d.mount: Deactivated successfully. 
Jan 14 05:45:25.784738 containerd[1620]: time="2026-01-14T05:45:25.784605966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46w2k,Uid:2b560ec8-f090-4614-a1d5-13a4bc0ce8dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"072ba9be8361f1bc5add96cbfca5c296449f8def0d081b22fe289e688266f638\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:25.790597 kubelet[2825]: E0114 05:45:25.789898 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"072ba9be8361f1bc5add96cbfca5c296449f8def0d081b22fe289e688266f638\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:25.790597 kubelet[2825]: E0114 05:45:25.789946 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"072ba9be8361f1bc5add96cbfca5c296449f8def0d081b22fe289e688266f638\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-46w2k" Jan 14 05:45:25.790597 kubelet[2825]: E0114 05:45:25.789963 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"072ba9be8361f1bc5add96cbfca5c296449f8def0d081b22fe289e688266f638\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-46w2k" 
Jan 14 05:45:25.790745 kubelet[2825]: E0114 05:45:25.790006 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"072ba9be8361f1bc5add96cbfca5c296449f8def0d081b22fe289e688266f638\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:45:26.115010 kubelet[2825]: E0114 05:45:26.114815 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:26.117844 containerd[1620]: time="2026-01-14T05:45:26.116843677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w9fc7,Uid:cefaac90-18ac-4910-8420-3803dde0c763,Namespace:kube-system,Attempt:0,}" Jan 14 05:45:26.121105 containerd[1620]: time="2026-01-14T05:45:26.120826668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h4bdc,Uid:1cbfb118-b594-42d6-be3d-0e1840e8dae4,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:26.711064 containerd[1620]: time="2026-01-14T05:45:26.711011307Z" level=error msg="Failed to destroy network for sandbox \"be4ace70473b66f8e80a329bd289138030c384118a81b339a9431e515fc389cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:26.724848 systemd[1]: 
run-netns-cni\x2d26ad4c32\x2da52c\x2d9f39\x2d7f98\x2d05784254ee5a.mount: Deactivated successfully. Jan 14 05:45:26.749112 containerd[1620]: time="2026-01-14T05:45:26.749076266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w9fc7,Uid:cefaac90-18ac-4910-8420-3803dde0c763,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be4ace70473b66f8e80a329bd289138030c384118a81b339a9431e515fc389cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:26.763849 kubelet[2825]: E0114 05:45:26.762914 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be4ace70473b66f8e80a329bd289138030c384118a81b339a9431e515fc389cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:26.763849 kubelet[2825]: E0114 05:45:26.763111 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be4ace70473b66f8e80a329bd289138030c384118a81b339a9431e515fc389cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w9fc7" Jan 14 05:45:26.763849 kubelet[2825]: E0114 05:45:26.763138 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be4ace70473b66f8e80a329bd289138030c384118a81b339a9431e515fc389cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w9fc7" Jan 14 05:45:26.765156 kubelet[2825]: E0114 05:45:26.763625 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-w9fc7_kube-system(cefaac90-18ac-4910-8420-3803dde0c763)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-w9fc7_kube-system(cefaac90-18ac-4910-8420-3803dde0c763)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be4ace70473b66f8e80a329bd289138030c384118a81b339a9431e515fc389cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-w9fc7" podUID="cefaac90-18ac-4910-8420-3803dde0c763" Jan 14 05:45:26.957913 containerd[1620]: time="2026-01-14T05:45:26.957852037Z" level=error msg="Failed to destroy network for sandbox \"62019c7a7588e88afa95280e66d7155a0b7114d13586e6204654d7a822999a21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:26.975014 containerd[1620]: time="2026-01-14T05:45:26.974895772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h4bdc,Uid:1cbfb118-b594-42d6-be3d-0e1840e8dae4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62019c7a7588e88afa95280e66d7155a0b7114d13586e6204654d7a822999a21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:26.975819 systemd[1]: run-netns-cni\x2da629c3c1\x2dc246\x2d81f9\x2db2be\x2d6d4bdbf84353.mount: Deactivated 
successfully. Jan 14 05:45:26.980139 kubelet[2825]: E0114 05:45:26.975983 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62019c7a7588e88afa95280e66d7155a0b7114d13586e6204654d7a822999a21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:26.980139 kubelet[2825]: E0114 05:45:26.976052 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62019c7a7588e88afa95280e66d7155a0b7114d13586e6204654d7a822999a21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-h4bdc" Jan 14 05:45:26.980139 kubelet[2825]: E0114 05:45:26.976081 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62019c7a7588e88afa95280e66d7155a0b7114d13586e6204654d7a822999a21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-h4bdc" Jan 14 05:45:26.981705 kubelet[2825]: E0114 05:45:26.976141 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-h4bdc_calico-system(1cbfb118-b594-42d6-be3d-0e1840e8dae4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-h4bdc_calico-system(1cbfb118-b594-42d6-be3d-0e1840e8dae4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62019c7a7588e88afa95280e66d7155a0b7114d13586e6204654d7a822999a21\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:45:37.187967 kubelet[2825]: E0114 05:45:37.187923 2825 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.315s" Jan 14 05:45:37.302028 kubelet[2825]: E0114 05:45:37.301787 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:37.347805 containerd[1620]: time="2026-01-14T05:45:37.346762280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85449f874f-xn2d4,Uid:64a69192-713c-418d-907c-75ea3917f0cd,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:37.365813 containerd[1620]: time="2026-01-14T05:45:37.347582496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-nrl4m,Uid:f8e3b291-7413-4398-b3ac-57e03796db9f,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:45:37.372825 containerd[1620]: time="2026-01-14T05:45:37.330154061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d668c555c-qwjx8,Uid:e1f153ba-430a-43e5-84a9-e29936603f76,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:45:37.388044 containerd[1620]: time="2026-01-14T05:45:37.387868474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46w2k,Uid:2b560ec8-f090-4614-a1d5-13a4bc0ce8dc,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:37.417859 containerd[1620]: time="2026-01-14T05:45:37.417822836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55766fbd54-svfq6,Uid:8b419574-ee17-4c47-bc9c-99544ac25d88,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:37.444995 containerd[1620]: 
time="2026-01-14T05:45:37.443954368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h4bdc,Uid:1cbfb118-b594-42d6-be3d-0e1840e8dae4,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:38.309888 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 05:45:38.310078 kernel: audit: type=1325 audit(1768369538.286:559): table=filter:115 family=2 entries=21 op=nft_register_rule pid=4300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:45:38.286000 audit[4300]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=4300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:45:38.286000 audit[4300]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff4b7b78b0 a2=0 a3=7fff4b7b789c items=0 ppid=2986 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:38.404902 kernel: audit: type=1300 audit(1768369538.286:559): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff4b7b78b0 a2=0 a3=7fff4b7b789c items=0 ppid=2986 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:38.286000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:45:38.440574 kernel: audit: type=1327 audit(1768369538.286:559): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:45:38.344000 audit[4300]: NETFILTER_CFG table=nat:116 family=2 entries=19 op=nft_register_chain pid=4300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:45:38.479637 kernel: audit: type=1325 audit(1768369538.344:560): table=nat:116 family=2 entries=19 
op=nft_register_chain pid=4300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:45:38.344000 audit[4300]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff4b7b78b0 a2=0 a3=7fff4b7b789c items=0 ppid=2986 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:38.544841 kernel: audit: type=1300 audit(1768369538.344:560): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff4b7b78b0 a2=0 a3=7fff4b7b789c items=0 ppid=2986 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:38.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:45:38.578774 kernel: audit: type=1327 audit(1768369538.344:560): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:45:38.855839 containerd[1620]: time="2026-01-14T05:45:38.847950744Z" level=error msg="Failed to destroy network for sandbox \"e50a27d375c27a3ff0b3ce4d82d1229cb156877c27578208d557c62298e39800\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:38.853726 systemd[1]: run-netns-cni\x2d644503a9\x2d54fc\x2d0086\x2d349a\x2d24ebc5cb9270.mount: Deactivated successfully. 
Jan 14 05:45:38.905792 containerd[1620]: time="2026-01-14T05:45:38.905030744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55766fbd54-svfq6,Uid:8b419574-ee17-4c47-bc9c-99544ac25d88,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e50a27d375c27a3ff0b3ce4d82d1229cb156877c27578208d557c62298e39800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:38.908730 kubelet[2825]: E0114 05:45:38.908138 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e50a27d375c27a3ff0b3ce4d82d1229cb156877c27578208d557c62298e39800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:38.908730 kubelet[2825]: E0114 05:45:38.908623 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e50a27d375c27a3ff0b3ce4d82d1229cb156877c27578208d557c62298e39800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55766fbd54-svfq6" Jan 14 05:45:38.909670 kubelet[2825]: E0114 05:45:38.908647 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e50a27d375c27a3ff0b3ce4d82d1229cb156877c27578208d557c62298e39800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-55766fbd54-svfq6" Jan 14 05:45:38.909670 kubelet[2825]: E0114 05:45:38.908934 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55766fbd54-svfq6_calico-system(8b419574-ee17-4c47-bc9c-99544ac25d88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55766fbd54-svfq6_calico-system(8b419574-ee17-4c47-bc9c-99544ac25d88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e50a27d375c27a3ff0b3ce4d82d1229cb156877c27578208d557c62298e39800\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55766fbd54-svfq6" podUID="8b419574-ee17-4c47-bc9c-99544ac25d88" Jan 14 05:45:39.057411 containerd[1620]: time="2026-01-14T05:45:39.056926805Z" level=error msg="Failed to destroy network for sandbox \"dfb950a0a55ec4f132a5837096ec8865915b6b4bb34f10b58831dbe0b3e8a42d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.060945 containerd[1620]: time="2026-01-14T05:45:39.059878855Z" level=error msg="Failed to destroy network for sandbox \"b2d515c412feaf788b25a5db2d7ff7c7b559dce661ca23d9a9661ffaab4fb066\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.081871 systemd[1]: run-netns-cni\x2d5bb60b83\x2d54f0\x2d49df\x2dd604\x2d6f3b4d2f3afb.mount: Deactivated successfully. Jan 14 05:45:39.083874 systemd[1]: run-netns-cni\x2d9f582edf\x2d61a6\x2d0585\x2d60e2\x2d1bb551b0b68e.mount: Deactivated successfully. 
Jan 14 05:45:39.119143 kubelet[2825]: E0114 05:45:39.118896 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:39.123916 containerd[1620]: time="2026-01-14T05:45:39.123624760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5h66t,Uid:e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156,Namespace:kube-system,Attempt:0,}" Jan 14 05:45:39.131656 containerd[1620]: time="2026-01-14T05:45:39.131100313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-nrl4m,Uid:f8e3b291-7413-4398-b3ac-57e03796db9f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d515c412feaf788b25a5db2d7ff7c7b559dce661ca23d9a9661ffaab4fb066\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.133910 kubelet[2825]: E0114 05:45:39.132891 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d515c412feaf788b25a5db2d7ff7c7b559dce661ca23d9a9661ffaab4fb066\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.133910 kubelet[2825]: E0114 05:45:39.133670 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d515c412feaf788b25a5db2d7ff7c7b559dce661ca23d9a9661ffaab4fb066\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" Jan 14 05:45:39.133910 kubelet[2825]: E0114 05:45:39.133697 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d515c412feaf788b25a5db2d7ff7c7b559dce661ca23d9a9661ffaab4fb066\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" Jan 14 05:45:39.134936 kubelet[2825]: E0114 05:45:39.133758 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c8b67549f-nrl4m_calico-apiserver(f8e3b291-7413-4398-b3ac-57e03796db9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c8b67549f-nrl4m_calico-apiserver(f8e3b291-7413-4398-b3ac-57e03796db9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2d515c412feaf788b25a5db2d7ff7c7b559dce661ca23d9a9661ffaab4fb066\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:45:39.145945 containerd[1620]: time="2026-01-14T05:45:39.135784813Z" level=error msg="Failed to destroy network for sandbox \"6d6963b8f8a55b827bda8e500f9084044e913c80247d1cf5453cc5bb1aae9b68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.145100 systemd[1]: run-netns-cni\x2d497376ac\x2d4220\x2db726\x2d8a80\x2d3ea08a3521b7.mount: Deactivated successfully. 
Jan 14 05:45:39.204969 containerd[1620]: time="2026-01-14T05:45:39.200997005Z" level=error msg="Failed to destroy network for sandbox \"8c99ac815a8804e89c495518aa68ca270a838494c68babc27088b37fd9244f9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.213646 containerd[1620]: time="2026-01-14T05:45:39.212143532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46w2k,Uid:2b560ec8-f090-4614-a1d5-13a4bc0ce8dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d6963b8f8a55b827bda8e500f9084044e913c80247d1cf5453cc5bb1aae9b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.220994 kubelet[2825]: E0114 05:45:39.215956 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d6963b8f8a55b827bda8e500f9084044e913c80247d1cf5453cc5bb1aae9b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.226640 kubelet[2825]: E0114 05:45:39.223985 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d6963b8f8a55b827bda8e500f9084044e913c80247d1cf5453cc5bb1aae9b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-46w2k" Jan 14 05:45:39.230580 containerd[1620]: time="2026-01-14T05:45:39.230141377Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85449f874f-xn2d4,Uid:64a69192-713c-418d-907c-75ea3917f0cd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb950a0a55ec4f132a5837096ec8865915b6b4bb34f10b58831dbe0b3e8a42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.233883 kubelet[2825]: E0114 05:45:39.233586 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb950a0a55ec4f132a5837096ec8865915b6b4bb34f10b58831dbe0b3e8a42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.234636 kubelet[2825]: E0114 05:45:39.234137 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb950a0a55ec4f132a5837096ec8865915b6b4bb34f10b58831dbe0b3e8a42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" Jan 14 05:45:39.234736 kubelet[2825]: E0114 05:45:39.234713 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb950a0a55ec4f132a5837096ec8865915b6b4bb34f10b58831dbe0b3e8a42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" Jan 14 05:45:39.235979 
kubelet[2825]: E0114 05:45:39.234989 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85449f874f-xn2d4_calico-system(64a69192-713c-418d-907c-75ea3917f0cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85449f874f-xn2d4_calico-system(64a69192-713c-418d-907c-75ea3917f0cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfb950a0a55ec4f132a5837096ec8865915b6b4bb34f10b58831dbe0b3e8a42d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:45:39.241044 kubelet[2825]: E0114 05:45:39.232012 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d6963b8f8a55b827bda8e500f9084044e913c80247d1cf5453cc5bb1aae9b68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-46w2k" Jan 14 05:45:39.242640 kubelet[2825]: E0114 05:45:39.241751 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d6963b8f8a55b827bda8e500f9084044e913c80247d1cf5453cc5bb1aae9b68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:45:39.267926 containerd[1620]: time="2026-01-14T05:45:39.267877422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d668c555c-qwjx8,Uid:e1f153ba-430a-43e5-84a9-e29936603f76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c99ac815a8804e89c495518aa68ca270a838494c68babc27088b37fd9244f9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.273975 kubelet[2825]: E0114 05:45:39.273000 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c99ac815a8804e89c495518aa68ca270a838494c68babc27088b37fd9244f9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.275944 kubelet[2825]: E0114 05:45:39.275128 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c99ac815a8804e89c495518aa68ca270a838494c68babc27088b37fd9244f9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" Jan 14 05:45:39.276744 kubelet[2825]: E0114 05:45:39.276381 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c99ac815a8804e89c495518aa68ca270a838494c68babc27088b37fd9244f9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" Jan 14 05:45:39.285837 kubelet[2825]: E0114 05:45:39.285028 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d668c555c-qwjx8_calico-apiserver(e1f153ba-430a-43e5-84a9-e29936603f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d668c555c-qwjx8_calico-apiserver(e1f153ba-430a-43e5-84a9-e29936603f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c99ac815a8804e89c495518aa68ca270a838494c68babc27088b37fd9244f9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:45:39.295034 containerd[1620]: time="2026-01-14T05:45:39.292948049Z" level=error msg="Failed to destroy network for sandbox \"e724b9e75b5f1d164cd2d54030f82cda61cbe027dd09b17778865dad82d4c359\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.310114 containerd[1620]: time="2026-01-14T05:45:39.308866702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h4bdc,Uid:1cbfb118-b594-42d6-be3d-0e1840e8dae4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e724b9e75b5f1d164cd2d54030f82cda61cbe027dd09b17778865dad82d4c359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.311412 
kubelet[2825]: E0114 05:45:39.310986 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e724b9e75b5f1d164cd2d54030f82cda61cbe027dd09b17778865dad82d4c359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.311412 kubelet[2825]: E0114 05:45:39.311058 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e724b9e75b5f1d164cd2d54030f82cda61cbe027dd09b17778865dad82d4c359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-h4bdc" Jan 14 05:45:39.313888 kubelet[2825]: E0114 05:45:39.313132 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e724b9e75b5f1d164cd2d54030f82cda61cbe027dd09b17778865dad82d4c359\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-h4bdc" Jan 14 05:45:39.319723 kubelet[2825]: E0114 05:45:39.319598 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-h4bdc_calico-system(1cbfb118-b594-42d6-be3d-0e1840e8dae4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-h4bdc_calico-system(1cbfb118-b594-42d6-be3d-0e1840e8dae4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e724b9e75b5f1d164cd2d54030f82cda61cbe027dd09b17778865dad82d4c359\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:45:39.445108 systemd[1]: run-netns-cni\x2d137983ec\x2d179d\x2df704\x2ddc04\x2dc2a225d3fc74.mount: Deactivated successfully. Jan 14 05:45:39.445663 systemd[1]: run-netns-cni\x2d46c51132\x2d45ad\x2d1846\x2da0d4\x2d9ebd806eb1f6.mount: Deactivated successfully. Jan 14 05:45:39.750831 containerd[1620]: time="2026-01-14T05:45:39.750118803Z" level=error msg="Failed to destroy network for sandbox \"2f085672f2da399435078f5b07f6b9e8e50db0f4251acd098e7b140e02a55ed3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.773059 containerd[1620]: time="2026-01-14T05:45:39.773008111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5h66t,Uid:e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f085672f2da399435078f5b07f6b9e8e50db0f4251acd098e7b140e02a55ed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.774125 systemd[1]: run-netns-cni\x2dff099a52\x2d06d6\x2d7dcd\x2d907e\x2db0f459b0a4bd.mount: Deactivated successfully. 
Jan 14 05:45:39.782942 kubelet[2825]: E0114 05:45:39.780740 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f085672f2da399435078f5b07f6b9e8e50db0f4251acd098e7b140e02a55ed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:39.782942 kubelet[2825]: E0114 05:45:39.780944 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f085672f2da399435078f5b07f6b9e8e50db0f4251acd098e7b140e02a55ed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5h66t" Jan 14 05:45:39.782942 kubelet[2825]: E0114 05:45:39.780971 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f085672f2da399435078f5b07f6b9e8e50db0f4251acd098e7b140e02a55ed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5h66t" Jan 14 05:45:39.783153 kubelet[2825]: E0114 05:45:39.781449 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5h66t_kube-system(e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5h66t_kube-system(e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f085672f2da399435078f5b07f6b9e8e50db0f4251acd098e7b140e02a55ed3\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5h66t" podUID="e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156" Jan 14 05:45:40.124995 kubelet[2825]: E0114 05:45:40.122688 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:40.126109 containerd[1620]: time="2026-01-14T05:45:40.125848715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w9fc7,Uid:cefaac90-18ac-4910-8420-3803dde0c763,Namespace:kube-system,Attempt:0,}" Jan 14 05:45:40.546768 containerd[1620]: time="2026-01-14T05:45:40.546704533Z" level=error msg="Failed to destroy network for sandbox \"0882adc8534e6749319610e4ca7bbd2176ceb6551e83f434a30c273a98bf9952\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:40.555854 systemd[1]: run-netns-cni\x2d72c1dbda\x2dbe1f\x2dad52\x2d86d0\x2d58ad924d3f01.mount: Deactivated successfully. 
Jan 14 05:45:41.155712 containerd[1620]: time="2026-01-14T05:45:41.153648396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w9fc7,Uid:cefaac90-18ac-4910-8420-3803dde0c763,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0882adc8534e6749319610e4ca7bbd2176ceb6551e83f434a30c273a98bf9952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:41.158647 kubelet[2825]: E0114 05:45:41.155694 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0882adc8534e6749319610e4ca7bbd2176ceb6551e83f434a30c273a98bf9952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:41.158647 kubelet[2825]: E0114 05:45:41.155751 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0882adc8534e6749319610e4ca7bbd2176ceb6551e83f434a30c273a98bf9952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w9fc7" Jan 14 05:45:41.158647 kubelet[2825]: E0114 05:45:41.155768 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0882adc8534e6749319610e4ca7bbd2176ceb6551e83f434a30c273a98bf9952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-w9fc7" Jan 14 05:45:41.159035 kubelet[2825]: E0114 05:45:41.157631 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-w9fc7_kube-system(cefaac90-18ac-4910-8420-3803dde0c763)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-w9fc7_kube-system(cefaac90-18ac-4910-8420-3803dde0c763)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0882adc8534e6749319610e4ca7bbd2176ceb6551e83f434a30c273a98bf9952\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-w9fc7" podUID="cefaac90-18ac-4910-8420-3803dde0c763" Jan 14 05:45:41.165100 containerd[1620]: time="2026-01-14T05:45:41.163732665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-bpw89,Uid:5832da08-4ce6-484b-b421-5f73ad1ce8d2,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:45:41.755466 containerd[1620]: time="2026-01-14T05:45:41.749376513Z" level=error msg="Failed to destroy network for sandbox \"e95a19a63d6c698f4c0057280b197dae67208b0506c74bbd275ceec6dbb537ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:41.763644 systemd[1]: run-netns-cni\x2d62f8fda2\x2d03f9\x2d2eab\x2de9ec\x2daa0c86c477ac.mount: Deactivated successfully. 
Jan 14 05:45:41.786470 containerd[1620]: time="2026-01-14T05:45:41.786401319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-bpw89,Uid:5832da08-4ce6-484b-b421-5f73ad1ce8d2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95a19a63d6c698f4c0057280b197dae67208b0506c74bbd275ceec6dbb537ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:41.789657 kubelet[2825]: E0114 05:45:41.788140 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95a19a63d6c698f4c0057280b197dae67208b0506c74bbd275ceec6dbb537ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:41.791445 kubelet[2825]: E0114 05:45:41.789791 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95a19a63d6c698f4c0057280b197dae67208b0506c74bbd275ceec6dbb537ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" Jan 14 05:45:41.791445 kubelet[2825]: E0114 05:45:41.789830 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95a19a63d6c698f4c0057280b197dae67208b0506c74bbd275ceec6dbb537ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" Jan 14 05:45:41.791769 kubelet[2825]: E0114 05:45:41.791032 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c8b67549f-bpw89_calico-apiserver(5832da08-4ce6-484b-b421-5f73ad1ce8d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c8b67549f-bpw89_calico-apiserver(5832da08-4ce6-484b-b421-5f73ad1ce8d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e95a19a63d6c698f4c0057280b197dae67208b0506c74bbd275ceec6dbb537ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:45:51.165100 kubelet[2825]: E0114 05:45:51.165051 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:51.169114 containerd[1620]: time="2026-01-14T05:45:51.168625609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46w2k,Uid:2b560ec8-f090-4614-a1d5-13a4bc0ce8dc,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:51.180085 containerd[1620]: time="2026-01-14T05:45:51.174141496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5h66t,Uid:e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156,Namespace:kube-system,Attempt:0,}" Jan 14 05:45:51.181487 containerd[1620]: time="2026-01-14T05:45:51.175059415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-nrl4m,Uid:f8e3b291-7413-4398-b3ac-57e03796db9f,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:45:51.683838 containerd[1620]: time="2026-01-14T05:45:51.683428828Z" level=error msg="Failed to destroy network for sandbox 
\"51a164c7e0433351af515e2500c7ff4d03c40c279dd6a6cc8063403d7a3a3c7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:51.691940 systemd[1]: run-netns-cni\x2da22a2556\x2d93fe\x2d6063\x2d98de\x2d2712ce004dfd.mount: Deactivated successfully. Jan 14 05:45:51.702476 containerd[1620]: time="2026-01-14T05:45:51.702443813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-nrl4m,Uid:f8e3b291-7413-4398-b3ac-57e03796db9f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a164c7e0433351af515e2500c7ff4d03c40c279dd6a6cc8063403d7a3a3c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:51.704958 kubelet[2825]: E0114 05:45:51.703643 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a164c7e0433351af515e2500c7ff4d03c40c279dd6a6cc8063403d7a3a3c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:51.704958 kubelet[2825]: E0114 05:45:51.703866 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a164c7e0433351af515e2500c7ff4d03c40c279dd6a6cc8063403d7a3a3c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" Jan 14 05:45:51.704958 kubelet[2825]: E0114 05:45:51.703889 2825 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a164c7e0433351af515e2500c7ff4d03c40c279dd6a6cc8063403d7a3a3c7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" Jan 14 05:45:51.705456 kubelet[2825]: E0114 05:45:51.703947 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c8b67549f-nrl4m_calico-apiserver(f8e3b291-7413-4398-b3ac-57e03796db9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c8b67549f-nrl4m_calico-apiserver(f8e3b291-7413-4398-b3ac-57e03796db9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51a164c7e0433351af515e2500c7ff4d03c40c279dd6a6cc8063403d7a3a3c7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:45:51.745660 containerd[1620]: time="2026-01-14T05:45:51.745596697Z" level=error msg="Failed to destroy network for sandbox \"ea634353013ba86d794d094aa1eae45cbcd7e1540b04e17f82a7e1c4b232179b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:51.752016 systemd[1]: run-netns-cni\x2d25a3950b\x2dc278\x2d4cbf\x2dbe74\x2d6b7a9b624993.mount: Deactivated successfully. 
Jan 14 05:45:51.774073 containerd[1620]: time="2026-01-14T05:45:51.773868226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5h66t,Uid:e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea634353013ba86d794d094aa1eae45cbcd7e1540b04e17f82a7e1c4b232179b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:51.777436 kubelet[2825]: E0114 05:45:51.775989 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea634353013ba86d794d094aa1eae45cbcd7e1540b04e17f82a7e1c4b232179b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:51.777436 kubelet[2825]: E0114 05:45:51.776046 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea634353013ba86d794d094aa1eae45cbcd7e1540b04e17f82a7e1c4b232179b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5h66t" Jan 14 05:45:51.777436 kubelet[2825]: E0114 05:45:51.776540 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea634353013ba86d794d094aa1eae45cbcd7e1540b04e17f82a7e1c4b232179b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-5h66t" Jan 14 05:45:51.778556 kubelet[2825]: E0114 05:45:51.778522 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5h66t_kube-system(e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5h66t_kube-system(e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea634353013ba86d794d094aa1eae45cbcd7e1540b04e17f82a7e1c4b232179b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5h66t" podUID="e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156" Jan 14 05:45:51.794120 containerd[1620]: time="2026-01-14T05:45:51.793607629Z" level=error msg="Failed to destroy network for sandbox \"722e493462089e6d503e4476acd27a4c8833cabd48964c93a6a9e20cc20d59de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:51.810849 containerd[1620]: time="2026-01-14T05:45:51.810642683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46w2k,Uid:2b560ec8-f090-4614-a1d5-13a4bc0ce8dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"722e493462089e6d503e4476acd27a4c8833cabd48964c93a6a9e20cc20d59de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:51.815677 kubelet[2825]: E0114 05:45:51.815641 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"722e493462089e6d503e4476acd27a4c8833cabd48964c93a6a9e20cc20d59de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:51.815936 kubelet[2825]: E0114 05:45:51.815920 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"722e493462089e6d503e4476acd27a4c8833cabd48964c93a6a9e20cc20d59de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-46w2k" Jan 14 05:45:51.816020 kubelet[2825]: E0114 05:45:51.816005 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"722e493462089e6d503e4476acd27a4c8833cabd48964c93a6a9e20cc20d59de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-46w2k" Jan 14 05:45:51.821077 kubelet[2825]: E0114 05:45:51.820505 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"722e493462089e6d503e4476acd27a4c8833cabd48964c93a6a9e20cc20d59de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-46w2k" 
podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:45:52.109386 containerd[1620]: time="2026-01-14T05:45:52.108889776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85449f874f-xn2d4,Uid:64a69192-713c-418d-907c-75ea3917f0cd,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:52.120863 containerd[1620]: time="2026-01-14T05:45:52.120685624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h4bdc,Uid:1cbfb118-b594-42d6-be3d-0e1840e8dae4,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:52.211968 systemd[1]: run-netns-cni\x2d2d914d73\x2d0db3\x2da012\x2d5186\x2dac6ce0909f17.mount: Deactivated successfully. Jan 14 05:45:52.470581 containerd[1620]: time="2026-01-14T05:45:52.469654075Z" level=error msg="Failed to destroy network for sandbox \"11395c0ba85133ba554d953c1209c9a49053213f99041e132d076d8dec87cf39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:52.479930 systemd[1]: run-netns-cni\x2d90a8506e\x2d23f2\x2da3f3\x2da824\x2dac7d06915789.mount: Deactivated successfully. 
Jan 14 05:45:52.492077 containerd[1620]: time="2026-01-14T05:45:52.491914096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85449f874f-xn2d4,Uid:64a69192-713c-418d-907c-75ea3917f0cd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"11395c0ba85133ba554d953c1209c9a49053213f99041e132d076d8dec87cf39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:52.494509 kubelet[2825]: E0114 05:45:52.494444 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11395c0ba85133ba554d953c1209c9a49053213f99041e132d076d8dec87cf39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:52.496558 kubelet[2825]: E0114 05:45:52.495864 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11395c0ba85133ba554d953c1209c9a49053213f99041e132d076d8dec87cf39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" Jan 14 05:45:52.496558 kubelet[2825]: E0114 05:45:52.496026 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11395c0ba85133ba554d953c1209c9a49053213f99041e132d076d8dec87cf39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" Jan 14 05:45:52.496558 kubelet[2825]: E0114 05:45:52.496096 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85449f874f-xn2d4_calico-system(64a69192-713c-418d-907c-75ea3917f0cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85449f874f-xn2d4_calico-system(64a69192-713c-418d-907c-75ea3917f0cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11395c0ba85133ba554d953c1209c9a49053213f99041e132d076d8dec87cf39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:45:52.595705 containerd[1620]: time="2026-01-14T05:45:52.593601374Z" level=error msg="Failed to destroy network for sandbox \"aac024828b131467c0671d0a3a19825915b5801e8479c7045111065f48303648\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:52.600963 systemd[1]: run-netns-cni\x2da017a585\x2dda88\x2db4e4\x2d6793\x2da59fddc40702.mount: Deactivated successfully. 
Jan 14 05:45:52.622104 containerd[1620]: time="2026-01-14T05:45:52.622062811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h4bdc,Uid:1cbfb118-b594-42d6-be3d-0e1840e8dae4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac024828b131467c0671d0a3a19825915b5801e8479c7045111065f48303648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:52.623914 kubelet[2825]: E0114 05:45:52.623858 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac024828b131467c0671d0a3a19825915b5801e8479c7045111065f48303648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:52.624067 kubelet[2825]: E0114 05:45:52.624041 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac024828b131467c0671d0a3a19825915b5801e8479c7045111065f48303648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-h4bdc" Jan 14 05:45:52.624141 kubelet[2825]: E0114 05:45:52.624126 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac024828b131467c0671d0a3a19825915b5801e8479c7045111065f48303648\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7c778bb748-h4bdc" Jan 14 05:45:52.625131 kubelet[2825]: E0114 05:45:52.624607 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-h4bdc_calico-system(1cbfb118-b594-42d6-be3d-0e1840e8dae4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-h4bdc_calico-system(1cbfb118-b594-42d6-be3d-0e1840e8dae4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aac024828b131467c0671d0a3a19825915b5801e8479c7045111065f48303648\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:45:53.113055 kubelet[2825]: E0114 05:45:53.111682 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:53.113870 containerd[1620]: time="2026-01-14T05:45:53.113692453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w9fc7,Uid:cefaac90-18ac-4910-8420-3803dde0c763,Namespace:kube-system,Attempt:0,}" Jan 14 05:45:53.137459 containerd[1620]: time="2026-01-14T05:45:53.137431034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d668c555c-qwjx8,Uid:e1f153ba-430a-43e5-84a9-e29936603f76,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:45:53.146933 containerd[1620]: time="2026-01-14T05:45:53.146865881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-bpw89,Uid:5832da08-4ce6-484b-b421-5f73ad1ce8d2,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:45:53.533123 containerd[1620]: time="2026-01-14T05:45:53.533073693Z" level=error msg="Failed to destroy network for sandbox 
\"9841ad615026d87f53e0f408497fae4394bebf2b9fb31b98ec5a3af462567a49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:53.543134 systemd[1]: run-netns-cni\x2dd868e763\x2d105d\x2db0a4\x2d2719\x2dc288aec44ac5.mount: Deactivated successfully. Jan 14 05:45:53.552659 containerd[1620]: time="2026-01-14T05:45:53.549875236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w9fc7,Uid:cefaac90-18ac-4910-8420-3803dde0c763,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9841ad615026d87f53e0f408497fae4394bebf2b9fb31b98ec5a3af462567a49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:53.553105 kubelet[2825]: E0114 05:45:53.550912 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9841ad615026d87f53e0f408497fae4394bebf2b9fb31b98ec5a3af462567a49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:53.553105 kubelet[2825]: E0114 05:45:53.550983 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9841ad615026d87f53e0f408497fae4394bebf2b9fb31b98ec5a3af462567a49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w9fc7" Jan 14 05:45:53.553105 kubelet[2825]: E0114 05:45:53.551007 2825 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9841ad615026d87f53e0f408497fae4394bebf2b9fb31b98ec5a3af462567a49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w9fc7" Jan 14 05:45:53.553664 kubelet[2825]: E0114 05:45:53.551073 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-w9fc7_kube-system(cefaac90-18ac-4910-8420-3803dde0c763)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-w9fc7_kube-system(cefaac90-18ac-4910-8420-3803dde0c763)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9841ad615026d87f53e0f408497fae4394bebf2b9fb31b98ec5a3af462567a49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-w9fc7" podUID="cefaac90-18ac-4910-8420-3803dde0c763" Jan 14 05:45:53.564111 containerd[1620]: time="2026-01-14T05:45:53.564075562Z" level=error msg="Failed to destroy network for sandbox \"faa673a6bae7cef3b741cdcc6a032140abdb20400386b423eafa2e76d3205632\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:53.575625 systemd[1]: run-netns-cni\x2d88470980\x2d6a9b\x2d43e6\x2d1538\x2d02af4d64cfaf.mount: Deactivated successfully. 
Jan 14 05:45:53.603434 containerd[1620]: time="2026-01-14T05:45:53.602642262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d668c555c-qwjx8,Uid:e1f153ba-430a-43e5-84a9-e29936603f76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"faa673a6bae7cef3b741cdcc6a032140abdb20400386b423eafa2e76d3205632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:53.606111 kubelet[2825]: E0114 05:45:53.605544 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faa673a6bae7cef3b741cdcc6a032140abdb20400386b423eafa2e76d3205632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:53.606111 kubelet[2825]: E0114 05:45:53.605625 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faa673a6bae7cef3b741cdcc6a032140abdb20400386b423eafa2e76d3205632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" Jan 14 05:45:53.606111 kubelet[2825]: E0114 05:45:53.605652 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faa673a6bae7cef3b741cdcc6a032140abdb20400386b423eafa2e76d3205632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" Jan 14 05:45:53.607561 kubelet[2825]: E0114 05:45:53.605716 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d668c555c-qwjx8_calico-apiserver(e1f153ba-430a-43e5-84a9-e29936603f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d668c555c-qwjx8_calico-apiserver(e1f153ba-430a-43e5-84a9-e29936603f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"faa673a6bae7cef3b741cdcc6a032140abdb20400386b423eafa2e76d3205632\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:45:53.719549 containerd[1620]: time="2026-01-14T05:45:53.719498115Z" level=error msg="Failed to destroy network for sandbox \"408fa2e70cf6a2cebac98b1781af741553bd82a724dedaaab1a4119fdffd974b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:53.730546 systemd[1]: run-netns-cni\x2df6968f90\x2d0a14\x2d8794\x2d7b2e\x2d33af67ad639a.mount: Deactivated successfully. 
Jan 14 05:45:53.770668 containerd[1620]: time="2026-01-14T05:45:53.770604291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-bpw89,Uid:5832da08-4ce6-484b-b421-5f73ad1ce8d2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"408fa2e70cf6a2cebac98b1781af741553bd82a724dedaaab1a4119fdffd974b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:53.773925 kubelet[2825]: E0114 05:45:53.772118 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"408fa2e70cf6a2cebac98b1781af741553bd82a724dedaaab1a4119fdffd974b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:53.773925 kubelet[2825]: E0114 05:45:53.772579 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"408fa2e70cf6a2cebac98b1781af741553bd82a724dedaaab1a4119fdffd974b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" Jan 14 05:45:53.773925 kubelet[2825]: E0114 05:45:53.772605 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"408fa2e70cf6a2cebac98b1781af741553bd82a724dedaaab1a4119fdffd974b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" Jan 14 05:45:53.774088 kubelet[2825]: E0114 05:45:53.772963 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c8b67549f-bpw89_calico-apiserver(5832da08-4ce6-484b-b421-5f73ad1ce8d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c8b67549f-bpw89_calico-apiserver(5832da08-4ce6-484b-b421-5f73ad1ce8d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"408fa2e70cf6a2cebac98b1781af741553bd82a724dedaaab1a4119fdffd974b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:45:54.103543 kubelet[2825]: E0114 05:45:54.103432 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:54.117980 containerd[1620]: time="2026-01-14T05:45:54.117911764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55766fbd54-svfq6,Uid:8b419574-ee17-4c47-bc9c-99544ac25d88,Namespace:calico-system,Attempt:0,}" Jan 14 05:45:54.212481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635038695.mount: Deactivated successfully. 
Jan 14 05:45:54.302605 containerd[1620]: time="2026-01-14T05:45:54.301952334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:45:54.304364 containerd[1620]: time="2026-01-14T05:45:54.304336295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 05:45:54.316428 containerd[1620]: time="2026-01-14T05:45:54.316122167Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:45:54.322043 containerd[1620]: time="2026-01-14T05:45:54.322002566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 05:45:54.325983 containerd[1620]: time="2026-01-14T05:45:54.325945125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 43.94147602s" Jan 14 05:45:54.326140 containerd[1620]: time="2026-01-14T05:45:54.326119942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 05:45:54.409461 containerd[1620]: time="2026-01-14T05:45:54.408679891Z" level=info msg="CreateContainer within sandbox \"91893bc284ee48de678f9cab15a7823ca9a74dc527b4425406f5cd97781170cc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 05:45:54.484667 containerd[1620]: time="2026-01-14T05:45:54.484007070Z" level=info msg="Container 
e1883c537e663916a08327bc940b8a0b71a654e659f48917507de7de9be17bae: CDI devices from CRI Config.CDIDevices: []" Jan 14 05:45:54.485124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2564137539.mount: Deactivated successfully. Jan 14 05:45:54.527996 containerd[1620]: time="2026-01-14T05:45:54.527439621Z" level=info msg="CreateContainer within sandbox \"91893bc284ee48de678f9cab15a7823ca9a74dc527b4425406f5cd97781170cc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e1883c537e663916a08327bc940b8a0b71a654e659f48917507de7de9be17bae\"" Jan 14 05:45:54.533552 containerd[1620]: time="2026-01-14T05:45:54.533517712Z" level=info msg="StartContainer for \"e1883c537e663916a08327bc940b8a0b71a654e659f48917507de7de9be17bae\"" Jan 14 05:45:54.554011 containerd[1620]: time="2026-01-14T05:45:54.553950158Z" level=error msg="Failed to destroy network for sandbox \"8b22edadb286fb40008c07529fcb880558825f0ed6d44a6a77b02ac0aa2db2f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:54.556114 containerd[1620]: time="2026-01-14T05:45:54.555721077Z" level=info msg="connecting to shim e1883c537e663916a08327bc940b8a0b71a654e659f48917507de7de9be17bae" address="unix:///run/containerd/s/09c0563195b1d67be1a967f7153904192fa272382dddeae1267081b8ffb3d47a" protocol=ttrpc version=3 Jan 14 05:45:54.568888 containerd[1620]: time="2026-01-14T05:45:54.568709431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55766fbd54-svfq6,Uid:8b419574-ee17-4c47-bc9c-99544ac25d88,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b22edadb286fb40008c07529fcb880558825f0ed6d44a6a77b02ac0aa2db2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 14 05:45:54.570590 kubelet[2825]: E0114 05:45:54.570537 2825 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b22edadb286fb40008c07529fcb880558825f0ed6d44a6a77b02ac0aa2db2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 05:45:54.573504 kubelet[2825]: E0114 05:45:54.571461 2825 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b22edadb286fb40008c07529fcb880558825f0ed6d44a6a77b02ac0aa2db2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55766fbd54-svfq6" Jan 14 05:45:54.573504 kubelet[2825]: E0114 05:45:54.571515 2825 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b22edadb286fb40008c07529fcb880558825f0ed6d44a6a77b02ac0aa2db2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55766fbd54-svfq6" Jan 14 05:45:54.573504 kubelet[2825]: E0114 05:45:54.571590 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55766fbd54-svfq6_calico-system(8b419574-ee17-4c47-bc9c-99544ac25d88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55766fbd54-svfq6_calico-system(8b419574-ee17-4c47-bc9c-99544ac25d88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b22edadb286fb40008c07529fcb880558825f0ed6d44a6a77b02ac0aa2db2f7\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55766fbd54-svfq6" podUID="8b419574-ee17-4c47-bc9c-99544ac25d88" Jan 14 05:45:54.959903 systemd[1]: Started cri-containerd-e1883c537e663916a08327bc940b8a0b71a654e659f48917507de7de9be17bae.scope - libcontainer container e1883c537e663916a08327bc940b8a0b71a654e659f48917507de7de9be17bae. Jan 14 05:45:55.147000 audit: BPF prog-id=170 op=LOAD Jan 14 05:45:55.147000 audit[4801]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3359 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:55.208120 systemd[1]: run-netns-cni\x2d6b7b7f05\x2dcaaf\x2ddd0f\x2d0382\x2db69d68280554.mount: Deactivated successfully. Jan 14 05:45:55.212687 kernel: audit: type=1334 audit(1768369555.147:561): prog-id=170 op=LOAD Jan 14 05:45:55.212918 kernel: audit: type=1300 audit(1768369555.147:561): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3359 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:55.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383833633533376536363339313661303833323762633934306238 Jan 14 05:45:55.156000 audit: BPF prog-id=171 op=LOAD Jan 14 05:45:55.282956 kernel: audit: type=1327 audit(1768369555.147:561): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383833633533376536363339313661303833323762633934306238 Jan 14 05:45:55.283051 kernel: audit: type=1334 audit(1768369555.156:562): prog-id=171 op=LOAD Jan 14 05:45:55.283092 kernel: audit: type=1300 audit(1768369555.156:562): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3359 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:55.156000 audit[4801]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3359 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:55.338641 kernel: audit: type=1327 audit(1768369555.156:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383833633533376536363339313661303833323762633934306238 Jan 14 05:45:55.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383833633533376536363339313661303833323762633934306238 Jan 14 05:45:55.390647 kernel: audit: type=1334 audit(1768369555.156:563): prog-id=171 op=UNLOAD Jan 14 05:45:55.156000 audit: BPF prog-id=171 op=UNLOAD Jan 14 05:45:55.156000 audit[4801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3359 pid=4801 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:55.457718 kernel: audit: type=1300 audit(1768369555.156:563): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3359 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:55.457995 kernel: audit: type=1327 audit(1768369555.156:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383833633533376536363339313661303833323762633934306238 Jan 14 05:45:55.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383833633533376536363339313661303833323762633934306238 Jan 14 05:45:55.521146 kernel: audit: type=1334 audit(1768369555.156:564): prog-id=170 op=UNLOAD Jan 14 05:45:55.156000 audit: BPF prog-id=170 op=UNLOAD Jan 14 05:45:55.156000 audit[4801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3359 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:55.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383833633533376536363339313661303833323762633934306238 Jan 14 05:45:55.156000 audit: BPF prog-id=172 op=LOAD Jan 14 05:45:55.156000 
audit[4801]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3359 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:45:55.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383833633533376536363339313661303833323762633934306238 Jan 14 05:45:55.546972 containerd[1620]: time="2026-01-14T05:45:55.546489332Z" level=info msg="StartContainer for \"e1883c537e663916a08327bc940b8a0b71a654e659f48917507de7de9be17bae\" returns successfully" Jan 14 05:45:56.271118 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 05:45:56.271555 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 14 05:45:56.494535 kubelet[2825]: E0114 05:45:56.493970 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:57.021620 kubelet[2825]: I0114 05:45:57.021508 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6djrl" podStartSLOduration=4.294562818 podStartE2EDuration="58.021487251s" podCreationTimestamp="2026-01-14 05:44:59 +0000 UTC" firstStartedPulling="2026-01-14 05:45:00.6036317 +0000 UTC m=+22.788111221" lastFinishedPulling="2026-01-14 05:45:54.330556133 +0000 UTC m=+76.515035654" observedRunningTime="2026-01-14 05:45:57.011985919 +0000 UTC m=+79.196465440" watchObservedRunningTime="2026-01-14 05:45:57.021487251 +0000 UTC m=+79.205966772" Jan 14 05:45:57.508424 kubelet[2825]: E0114 05:45:57.507948 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:45:57.872583 kubelet[2825]: I0114 05:45:57.871450 2825 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv5z8\" (UniqueName: \"kubernetes.io/projected/8b419574-ee17-4c47-bc9c-99544ac25d88-kube-api-access-cv5z8\") pod \"8b419574-ee17-4c47-bc9c-99544ac25d88\" (UID: \"8b419574-ee17-4c47-bc9c-99544ac25d88\") " Jan 14 05:45:57.872583 kubelet[2825]: I0114 05:45:57.871659 2825 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b419574-ee17-4c47-bc9c-99544ac25d88-whisker-ca-bundle\") pod \"8b419574-ee17-4c47-bc9c-99544ac25d88\" (UID: \"8b419574-ee17-4c47-bc9c-99544ac25d88\") " Jan 14 05:45:57.872583 kubelet[2825]: I0114 05:45:57.871698 2825 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/8b419574-ee17-4c47-bc9c-99544ac25d88-whisker-backend-key-pair\") pod \"8b419574-ee17-4c47-bc9c-99544ac25d88\" (UID: \"8b419574-ee17-4c47-bc9c-99544ac25d88\") " Jan 14 05:45:57.874639 kubelet[2825]: I0114 05:45:57.874602 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b419574-ee17-4c47-bc9c-99544ac25d88-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8b419574-ee17-4c47-bc9c-99544ac25d88" (UID: "8b419574-ee17-4c47-bc9c-99544ac25d88"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 05:45:57.906111 kubelet[2825]: I0114 05:45:57.901583 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b419574-ee17-4c47-bc9c-99544ac25d88-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8b419574-ee17-4c47-bc9c-99544ac25d88" (UID: "8b419574-ee17-4c47-bc9c-99544ac25d88"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 05:45:57.902632 systemd[1]: var-lib-kubelet-pods-8b419574\x2dee17\x2d4c47\x2dbc9c\x2d99544ac25d88-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 05:45:57.925064 kubelet[2825]: I0114 05:45:57.924052 2825 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b419574-ee17-4c47-bc9c-99544ac25d88-kube-api-access-cv5z8" (OuterVolumeSpecName: "kube-api-access-cv5z8") pod "8b419574-ee17-4c47-bc9c-99544ac25d88" (UID: "8b419574-ee17-4c47-bc9c-99544ac25d88"). InnerVolumeSpecName "kube-api-access-cv5z8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 05:45:57.928135 systemd[1]: var-lib-kubelet-pods-8b419574\x2dee17\x2d4c47\x2dbc9c\x2d99544ac25d88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcv5z8.mount: Deactivated successfully. 
Jan 14 05:45:57.974977 kubelet[2825]: I0114 05:45:57.974788 2825 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b419574-ee17-4c47-bc9c-99544ac25d88-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 14 05:45:57.976419 kubelet[2825]: I0114 05:45:57.975454 2825 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b419574-ee17-4c47-bc9c-99544ac25d88-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 14 05:45:57.976419 kubelet[2825]: I0114 05:45:57.975473 2825 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cv5z8\" (UniqueName: \"kubernetes.io/projected/8b419574-ee17-4c47-bc9c-99544ac25d88-kube-api-access-cv5z8\") on node \"localhost\" DevicePath \"\"" Jan 14 05:45:58.155785 systemd[1]: Removed slice kubepods-besteffort-pod8b419574_ee17_4c47_bc9c_99544ac25d88.slice - libcontainer container kubepods-besteffort-pod8b419574_ee17_4c47_bc9c_99544ac25d88.slice. Jan 14 05:45:58.535399 kubelet[2825]: E0114 05:45:58.533991 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:00.701760 systemd[1]: Created slice kubepods-besteffort-poda883e1fb_a961_4974_a5a0_9481f730a55a.slice - libcontainer container kubepods-besteffort-poda883e1fb_a961_4974_a5a0_9481f730a55a.slice. 
Jan 14 05:46:00.739721 kubelet[2825]: I0114 05:46:00.739089 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wj4r\" (UniqueName: \"kubernetes.io/projected/a883e1fb-a961-4974-a5a0-9481f730a55a-kube-api-access-4wj4r\") pod \"whisker-9f97fd46d-kvn2d\" (UID: \"a883e1fb-a961-4974-a5a0-9481f730a55a\") " pod="calico-system/whisker-9f97fd46d-kvn2d" Jan 14 05:46:00.739721 kubelet[2825]: I0114 05:46:00.739493 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a883e1fb-a961-4974-a5a0-9481f730a55a-whisker-backend-key-pair\") pod \"whisker-9f97fd46d-kvn2d\" (UID: \"a883e1fb-a961-4974-a5a0-9481f730a55a\") " pod="calico-system/whisker-9f97fd46d-kvn2d" Jan 14 05:46:00.739721 kubelet[2825]: I0114 05:46:00.739530 2825 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a883e1fb-a961-4974-a5a0-9481f730a55a-whisker-ca-bundle\") pod \"whisker-9f97fd46d-kvn2d\" (UID: \"a883e1fb-a961-4974-a5a0-9481f730a55a\") " pod="calico-system/whisker-9f97fd46d-kvn2d" Jan 14 05:46:01.032813 containerd[1620]: time="2026-01-14T05:46:01.031677991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9f97fd46d-kvn2d,Uid:a883e1fb-a961-4974-a5a0-9481f730a55a,Namespace:calico-system,Attempt:0,}" Jan 14 05:46:02.109731 kubelet[2825]: I0114 05:46:02.109510 2825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b419574-ee17-4c47-bc9c-99544ac25d88" path="/var/lib/kubelet/pods/8b419574-ee17-4c47-bc9c-99544ac25d88/volumes" Jan 14 05:46:02.440784 systemd-networkd[1505]: cali5df88f9f192: Link UP Jan 14 05:46:02.447562 systemd-networkd[1505]: cali5df88f9f192: Gained carrier Jan 14 05:46:02.571674 containerd[1620]: 2026-01-14 05:46:01.235 [INFO][4938] cni-plugin/utils.go 100: File /var/lib/calico/mtu 
does not exist Jan 14 05:46:02.571674 containerd[1620]: 2026-01-14 05:46:01.377 [INFO][4938] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--9f97fd46d--kvn2d-eth0 whisker-9f97fd46d- calico-system a883e1fb-a961-4974-a5a0-9481f730a55a 1035 0 2026-01-14 05:46:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9f97fd46d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-9f97fd46d-kvn2d eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5df88f9f192 [] [] }} ContainerID="f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" Namespace="calico-system" Pod="whisker-9f97fd46d-kvn2d" WorkloadEndpoint="localhost-k8s-whisker--9f97fd46d--kvn2d-" Jan 14 05:46:02.571674 containerd[1620]: 2026-01-14 05:46:01.378 [INFO][4938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" Namespace="calico-system" Pod="whisker-9f97fd46d-kvn2d" WorkloadEndpoint="localhost-k8s-whisker--9f97fd46d--kvn2d-eth0" Jan 14 05:46:02.571674 containerd[1620]: 2026-01-14 05:46:01.936 [INFO][4957] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" HandleID="k8s-pod-network.f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" Workload="localhost-k8s-whisker--9f97fd46d--kvn2d-eth0" Jan 14 05:46:02.580125 containerd[1620]: 2026-01-14 05:46:01.942 [INFO][4957] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" HandleID="k8s-pod-network.f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" Workload="localhost-k8s-whisker--9f97fd46d--kvn2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0003e14f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-9f97fd46d-kvn2d", "timestamp":"2026-01-14 05:46:01.936118102 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 05:46:02.580125 containerd[1620]: 2026-01-14 05:46:01.942 [INFO][4957] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 05:46:02.580125 containerd[1620]: 2026-01-14 05:46:01.942 [INFO][4957] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 05:46:02.580125 containerd[1620]: 2026-01-14 05:46:01.944 [INFO][4957] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 05:46:02.580125 containerd[1620]: 2026-01-14 05:46:02.051 [INFO][4957] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" host="localhost" Jan 14 05:46:02.580125 containerd[1620]: 2026-01-14 05:46:02.096 [INFO][4957] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 05:46:02.580125 containerd[1620]: 2026-01-14 05:46:02.129 [INFO][4957] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 05:46:02.580125 containerd[1620]: 2026-01-14 05:46:02.144 [INFO][4957] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:02.580125 containerd[1620]: 2026-01-14 05:46:02.163 [INFO][4957] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:02.580125 containerd[1620]: 2026-01-14 05:46:02.164 [INFO][4957] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" 
host="localhost" Jan 14 05:46:02.581791 containerd[1620]: 2026-01-14 05:46:02.179 [INFO][4957] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec Jan 14 05:46:02.581791 containerd[1620]: 2026-01-14 05:46:02.207 [INFO][4957] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" host="localhost" Jan 14 05:46:02.581791 containerd[1620]: 2026-01-14 05:46:02.234 [INFO][4957] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" host="localhost" Jan 14 05:46:02.581791 containerd[1620]: 2026-01-14 05:46:02.236 [INFO][4957] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" host="localhost" Jan 14 05:46:02.581791 containerd[1620]: 2026-01-14 05:46:02.236 [INFO][4957] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 05:46:02.581791 containerd[1620]: 2026-01-14 05:46:02.236 [INFO][4957] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" HandleID="k8s-pod-network.f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" Workload="localhost-k8s-whisker--9f97fd46d--kvn2d-eth0" Jan 14 05:46:02.588536 containerd[1620]: 2026-01-14 05:46:02.262 [INFO][4938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" Namespace="calico-system" Pod="whisker-9f97fd46d-kvn2d" WorkloadEndpoint="localhost-k8s-whisker--9f97fd46d--kvn2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--9f97fd46d--kvn2d-eth0", GenerateName:"whisker-9f97fd46d-", Namespace:"calico-system", SelfLink:"", UID:"a883e1fb-a961-4974-a5a0-9481f730a55a", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 46, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9f97fd46d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-9f97fd46d-kvn2d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5df88f9f192", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:02.588536 containerd[1620]: 2026-01-14 05:46:02.267 [INFO][4938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" Namespace="calico-system" Pod="whisker-9f97fd46d-kvn2d" WorkloadEndpoint="localhost-k8s-whisker--9f97fd46d--kvn2d-eth0" Jan 14 05:46:02.589135 containerd[1620]: 2026-01-14 05:46:02.268 [INFO][4938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5df88f9f192 ContainerID="f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" Namespace="calico-system" Pod="whisker-9f97fd46d-kvn2d" WorkloadEndpoint="localhost-k8s-whisker--9f97fd46d--kvn2d-eth0" Jan 14 05:46:02.589135 containerd[1620]: 2026-01-14 05:46:02.460 [INFO][4938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" Namespace="calico-system" Pod="whisker-9f97fd46d-kvn2d" WorkloadEndpoint="localhost-k8s-whisker--9f97fd46d--kvn2d-eth0" Jan 14 05:46:02.596503 containerd[1620]: 2026-01-14 05:46:02.464 [INFO][4938] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" Namespace="calico-system" Pod="whisker-9f97fd46d-kvn2d" WorkloadEndpoint="localhost-k8s-whisker--9f97fd46d--kvn2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--9f97fd46d--kvn2d-eth0", GenerateName:"whisker-9f97fd46d-", Namespace:"calico-system", SelfLink:"", UID:"a883e1fb-a961-4974-a5a0-9481f730a55a", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 46, 0, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9f97fd46d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec", Pod:"whisker-9f97fd46d-kvn2d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5df88f9f192", MAC:"96:c9:f3:15:79:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:02.596829 containerd[1620]: 2026-01-14 05:46:02.518 [INFO][4938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" Namespace="calico-system" Pod="whisker-9f97fd46d-kvn2d" WorkloadEndpoint="localhost-k8s-whisker--9f97fd46d--kvn2d-eth0" Jan 14 05:46:03.074477 containerd[1620]: time="2026-01-14T05:46:03.047568023Z" level=info msg="connecting to shim f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec" address="unix:///run/containerd/s/c3d34f2c73190f96e6359111cc6a250370f41c6a888be27d88f0deee36f456a3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:46:03.108588 kubelet[2825]: E0114 05:46:03.107884 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:03.303436 systemd[1]: Started 
cri-containerd-f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec.scope - libcontainer container f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec. Jan 14 05:46:03.442000 audit: BPF prog-id=173 op=LOAD Jan 14 05:46:03.454839 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 14 05:46:03.455095 kernel: audit: type=1334 audit(1768369563.442:566): prog-id=173 op=LOAD Jan 14 05:46:03.486770 kernel: audit: type=1334 audit(1768369563.445:567): prog-id=174 op=LOAD Jan 14 05:46:03.445000 audit: BPF prog-id=174 op=LOAD Jan 14 05:46:03.474059 systemd-resolved[1291]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 05:46:03.445000 audit[5091]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=5070 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:03.542640 kernel: audit: type=1300 audit(1768369563.445:567): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=5070 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:03.542781 kernel: audit: type=1327 audit(1768369563.445:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356537303235353634363338616163333862616137333836613636 Jan 14 05:46:03.445000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356537303235353634363338616163333862616137333836613636 Jan 14 05:46:03.445000 audit: BPF prog-id=174 op=UNLOAD Jan 14 05:46:03.611104 kernel: audit: type=1334 audit(1768369563.445:568): prog-id=174 op=UNLOAD Jan 14 05:46:03.445000 audit[5091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5070 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:03.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356537303235353634363338616163333862616137333836613636 Jan 14 05:46:03.713323 kernel: audit: type=1300 audit(1768369563.445:568): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5070 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:03.713472 kernel: audit: type=1327 audit(1768369563.445:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356537303235353634363338616163333862616137333836613636 Jan 14 05:46:03.446000 audit: BPF prog-id=175 op=LOAD Jan 14 05:46:03.727627 kernel: audit: type=1334 audit(1768369563.446:569): prog-id=175 op=LOAD Jan 14 05:46:03.446000 audit[5091]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 
ppid=5070 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:03.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356537303235353634363338616163333862616137333836613636 Jan 14 05:46:03.827787 kernel: audit: type=1300 audit(1768369563.446:569): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=5070 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:03.828037 kernel: audit: type=1327 audit(1768369563.446:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356537303235353634363338616163333862616137333836613636 Jan 14 05:46:03.446000 audit: BPF prog-id=176 op=LOAD Jan 14 05:46:03.446000 audit[5091]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=5070 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:03.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356537303235353634363338616163333862616137333836613636 Jan 14 05:46:03.446000 audit: BPF prog-id=176 op=UNLOAD Jan 14 05:46:03.446000 audit[5091]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5070 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:03.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356537303235353634363338616163333862616137333836613636 Jan 14 05:46:03.446000 audit: BPF prog-id=175 op=UNLOAD Jan 14 05:46:03.446000 audit[5091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5070 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:03.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356537303235353634363338616163333862616137333836613636 Jan 14 05:46:03.446000 audit: BPF prog-id=177 op=LOAD Jan 14 05:46:03.446000 audit[5091]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=5070 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:03.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633356537303235353634363338616163333862616137333836613636 Jan 14 05:46:03.846078 containerd[1620]: time="2026-01-14T05:46:03.845888977Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9f97fd46d-kvn2d,Uid:a883e1fb-a961-4974-a5a0-9481f730a55a,Namespace:calico-system,Attempt:0,} returns sandbox id \"f35e7025564638aac38baa7386a66dabcfc8043acf80b724048bee5aabe85fec\"" Jan 14 05:46:03.859371 containerd[1620]: time="2026-01-14T05:46:03.858703401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 05:46:03.959482 containerd[1620]: time="2026-01-14T05:46:03.958420927Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:03.966678 containerd[1620]: time="2026-01-14T05:46:03.965755870Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 05:46:03.966678 containerd[1620]: time="2026-01-14T05:46:03.966152773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:03.968871 kubelet[2825]: E0114 05:46:03.968584 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 05:46:03.974876 kubelet[2825]: E0114 05:46:03.974318 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 05:46:03.974876 kubelet[2825]: E0114 05:46:03.974461 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-9f97fd46d-kvn2d_calico-system(a883e1fb-a961-4974-a5a0-9481f730a55a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:03.980545 containerd[1620]: time="2026-01-14T05:46:03.979873121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 05:46:04.066868 containerd[1620]: time="2026-01-14T05:46:04.066816102Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:04.073696 containerd[1620]: time="2026-01-14T05:46:04.073518279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 05:46:04.074568 containerd[1620]: time="2026-01-14T05:46:04.074090832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:04.079436 kubelet[2825]: E0114 05:46:04.078648 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 05:46:04.079436 kubelet[2825]: E0114 05:46:04.078808 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 05:46:04.079436 kubelet[2825]: E0114 05:46:04.079111 2825 
kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9f97fd46d-kvn2d_calico-system(a883e1fb-a961-4974-a5a0-9481f730a55a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:04.079436 kubelet[2825]: E0114 05:46:04.079410 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a" Jan 14 05:46:04.161366 containerd[1620]: time="2026-01-14T05:46:04.160515814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85449f874f-xn2d4,Uid:64a69192-713c-418d-907c-75ea3917f0cd,Namespace:calico-system,Attempt:0,}" Jan 14 05:46:04.161439 systemd-networkd[1505]: cali5df88f9f192: Gained IPv6LL Jan 14 05:46:04.163690 containerd[1620]: time="2026-01-14T05:46:04.163572997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46w2k,Uid:2b560ec8-f090-4614-a1d5-13a4bc0ce8dc,Namespace:calico-system,Attempt:0,}" Jan 14 05:46:04.165627 containerd[1620]: time="2026-01-14T05:46:04.165442423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h4bdc,Uid:1cbfb118-b594-42d6-be3d-0e1840e8dae4,Namespace:calico-system,Attempt:0,}" Jan 14 
05:46:04.745524 kubelet[2825]: E0114 05:46:04.739886 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a" Jan 14 05:46:05.103892 kubelet[2825]: E0114 05:46:05.103626 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:05.277000 audit[5202]: NETFILTER_CFG table=filter:117 family=2 entries=20 op=nft_register_rule pid=5202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:46:05.277000 audit[5202]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdb193f9a0 a2=0 a3=7ffdb193f98c items=0 ppid=2986 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:05.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:05.284000 audit[5202]: NETFILTER_CFG table=nat:118 family=2 entries=14 op=nft_register_rule pid=5202 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 14 05:46:05.284000 audit[5202]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdb193f9a0 a2=0 a3=0 items=0 ppid=2986 pid=5202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:05.284000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:05.475000 audit: BPF prog-id=178 op=LOAD Jan 14 05:46:05.475000 audit[5226]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd66dd1280 a2=98 a3=1fffffffffffffff items=0 ppid=4984 pid=5226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:05.475000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 05:46:05.476000 audit: BPF prog-id=178 op=UNLOAD Jan 14 05:46:05.476000 audit[5226]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd66dd1250 a3=0 items=0 ppid=4984 pid=5226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:05.476000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 05:46:05.476000 audit: BPF prog-id=179 op=LOAD Jan 14 05:46:05.476000 audit[5226]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd66dd1160 a2=94 a3=3 items=0 ppid=4984 pid=5226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:05.476000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 05:46:05.476000 audit: BPF prog-id=179 op=UNLOAD Jan 14 05:46:05.476000 audit[5226]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd66dd1160 a2=94 a3=3 items=0 ppid=4984 pid=5226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:05.476000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 05:46:05.476000 audit: BPF prog-id=180 op=LOAD Jan 14 05:46:05.476000 audit[5226]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd66dd11a0 a2=94 a3=7ffd66dd1380 items=0 ppid=4984 pid=5226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:05.476000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 05:46:05.476000 
audit: BPF prog-id=180 op=UNLOAD Jan 14 05:46:05.476000 audit[5226]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd66dd11a0 a2=94 a3=7ffd66dd1380 items=0 ppid=4984 pid=5226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:05.476000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 05:46:05.498000 audit: BPF prog-id=181 op=LOAD Jan 14 05:46:05.498000 audit[5227]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe937ad50 a2=98 a3=3 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:05.498000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:05.499000 audit: BPF prog-id=181 op=UNLOAD Jan 14 05:46:05.499000 audit[5227]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffe937ad20 a3=0 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:05.499000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:05.519000 audit: BPF prog-id=182 op=LOAD Jan 14 05:46:05.519000 audit[5227]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffe937ab40 a2=94 a3=54428f items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:05.519000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:05.523000 audit: BPF prog-id=182 op=UNLOAD Jan 14 05:46:05.523000 audit[5227]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffe937ab40 a2=94 a3=54428f items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:05.523000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:05.523000 audit: BPF prog-id=183 op=LOAD Jan 14 05:46:05.523000 audit[5227]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffe937ab70 a2=94 a3=2 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:05.523000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:05.524000 audit: BPF prog-id=183 op=UNLOAD Jan 14 05:46:05.524000 audit[5227]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffe937ab70 a2=0 a3=2 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:05.524000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:05.747608 kubelet[2825]: E0114 05:46:05.745844 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a" Jan 14 05:46:05.947111 systemd-networkd[1505]: calibc0bbaf51ae: Link UP Jan 14 05:46:05.950413 systemd-networkd[1505]: calibc0bbaf51ae: Gained carrier Jan 14 05:46:06.079839 containerd[1620]: 2026-01-14 05:46:05.086 [INFO][5149] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--85449f874f--xn2d4-eth0 calico-kube-controllers-85449f874f- calico-system 64a69192-713c-418d-907c-75ea3917f0cd 854 0 2026-01-14 05:45:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85449f874f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-85449f874f-xn2d4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibc0bbaf51ae [] [] }} ContainerID="a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" Namespace="calico-system" Pod="calico-kube-controllers-85449f874f-xn2d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85449f874f--xn2d4-" Jan 14 05:46:06.079839 containerd[1620]: 2026-01-14 05:46:05.086 [INFO][5149] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" Namespace="calico-system" 
Pod="calico-kube-controllers-85449f874f-xn2d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85449f874f--xn2d4-eth0" Jan 14 05:46:06.079839 containerd[1620]: 2026-01-14 05:46:05.617 [INFO][5200] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" HandleID="k8s-pod-network.a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" Workload="localhost-k8s-calico--kube--controllers--85449f874f--xn2d4-eth0" Jan 14 05:46:06.082908 containerd[1620]: 2026-01-14 05:46:05.618 [INFO][5200] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" HandleID="k8s-pod-network.a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" Workload="localhost-k8s-calico--kube--controllers--85449f874f--xn2d4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-85449f874f-xn2d4", "timestamp":"2026-01-14 05:46:05.617797545 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 05:46:06.082908 containerd[1620]: 2026-01-14 05:46:05.618 [INFO][5200] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 05:46:06.082908 containerd[1620]: 2026-01-14 05:46:05.618 [INFO][5200] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 05:46:06.082908 containerd[1620]: 2026-01-14 05:46:05.618 [INFO][5200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 05:46:06.082908 containerd[1620]: 2026-01-14 05:46:05.686 [INFO][5200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" host="localhost" Jan 14 05:46:06.082908 containerd[1620]: 2026-01-14 05:46:05.716 [INFO][5200] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 05:46:06.082908 containerd[1620]: 2026-01-14 05:46:05.783 [INFO][5200] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 05:46:06.082908 containerd[1620]: 2026-01-14 05:46:05.809 [INFO][5200] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:06.082908 containerd[1620]: 2026-01-14 05:46:05.831 [INFO][5200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:06.082908 containerd[1620]: 2026-01-14 05:46:05.831 [INFO][5200] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" host="localhost" Jan 14 05:46:06.083582 containerd[1620]: 2026-01-14 05:46:05.844 [INFO][5200] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a Jan 14 05:46:06.083582 containerd[1620]: 2026-01-14 05:46:05.865 [INFO][5200] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" host="localhost" Jan 14 05:46:06.083582 containerd[1620]: 2026-01-14 05:46:05.901 [INFO][5200] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" host="localhost" Jan 14 05:46:06.083582 containerd[1620]: 2026-01-14 05:46:05.902 [INFO][5200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" host="localhost" Jan 14 05:46:06.083582 containerd[1620]: 2026-01-14 05:46:05.907 [INFO][5200] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 05:46:06.083582 containerd[1620]: 2026-01-14 05:46:05.908 [INFO][5200] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" HandleID="k8s-pod-network.a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" Workload="localhost-k8s-calico--kube--controllers--85449f874f--xn2d4-eth0" Jan 14 05:46:06.083692 containerd[1620]: 2026-01-14 05:46:05.931 [INFO][5149] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" Namespace="calico-system" Pod="calico-kube-controllers-85449f874f-xn2d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85449f874f--xn2d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85449f874f--xn2d4-eth0", GenerateName:"calico-kube-controllers-85449f874f-", Namespace:"calico-system", SelfLink:"", UID:"64a69192-713c-418d-907c-75ea3917f0cd", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85449f874f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-85449f874f-xn2d4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc0bbaf51ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:06.084077 containerd[1620]: 2026-01-14 05:46:05.931 [INFO][5149] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" Namespace="calico-system" Pod="calico-kube-controllers-85449f874f-xn2d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85449f874f--xn2d4-eth0" Jan 14 05:46:06.084077 containerd[1620]: 2026-01-14 05:46:05.932 [INFO][5149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc0bbaf51ae ContainerID="a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" Namespace="calico-system" Pod="calico-kube-controllers-85449f874f-xn2d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85449f874f--xn2d4-eth0" Jan 14 05:46:06.084077 containerd[1620]: 2026-01-14 05:46:06.018 [INFO][5149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" Namespace="calico-system" Pod="calico-kube-controllers-85449f874f-xn2d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85449f874f--xn2d4-eth0" Jan 14 05:46:06.084142 containerd[1620]: 
2026-01-14 05:46:06.024 [INFO][5149] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" Namespace="calico-system" Pod="calico-kube-controllers-85449f874f-xn2d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85449f874f--xn2d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85449f874f--xn2d4-eth0", GenerateName:"calico-kube-controllers-85449f874f-", Namespace:"calico-system", SelfLink:"", UID:"64a69192-713c-418d-907c-75ea3917f0cd", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85449f874f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a", Pod:"calico-kube-controllers-85449f874f-xn2d4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc0bbaf51ae", MAC:"86:16:11:7b:80:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:06.084541 containerd[1620]: 
2026-01-14 05:46:06.064 [INFO][5149] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" Namespace="calico-system" Pod="calico-kube-controllers-85449f874f-xn2d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85449f874f--xn2d4-eth0" Jan 14 05:46:06.132382 containerd[1620]: time="2026-01-14T05:46:06.130569762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-nrl4m,Uid:f8e3b291-7413-4398-b3ac-57e03796db9f,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:46:06.243693 systemd-networkd[1505]: califac0ac8392c: Link UP Jan 14 05:46:06.262000 audit: BPF prog-id=184 op=LOAD Jan 14 05:46:06.262000 audit[5227]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffe937aa30 a2=94 a3=1 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.262000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:06.263000 audit: BPF prog-id=184 op=UNLOAD Jan 14 05:46:06.263000 audit[5227]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffe937aa30 a2=94 a3=1 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.263000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:06.269512 systemd-networkd[1505]: califac0ac8392c: Gained carrier Jan 14 05:46:06.279000 audit: BPF prog-id=185 op=LOAD Jan 14 05:46:06.279000 audit[5227]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffe937aa20 a2=94 a3=4 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.279000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:06.281000 audit: BPF prog-id=185 op=UNLOAD Jan 14 05:46:06.281000 audit[5227]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffe937aa20 a2=0 a3=4 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.281000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:06.282000 audit: BPF prog-id=186 op=LOAD Jan 14 05:46:06.282000 audit[5227]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffe937a880 a2=94 a3=5 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.282000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:06.282000 audit: BPF prog-id=186 op=UNLOAD Jan 14 05:46:06.282000 audit[5227]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffe937a880 a2=0 a3=5 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.282000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:06.283000 audit: BPF prog-id=187 op=LOAD Jan 14 05:46:06.283000 audit[5227]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffe937aaa0 a2=94 a3=6 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 05:46:06.283000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:06.283000 audit: BPF prog-id=187 op=UNLOAD Jan 14 05:46:06.283000 audit[5227]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffe937aaa0 a2=0 a3=6 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.283000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:06.283000 audit: BPF prog-id=188 op=LOAD Jan 14 05:46:06.283000 audit[5227]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffe937a250 a2=94 a3=88 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.283000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:06.285000 audit: BPF prog-id=189 op=LOAD Jan 14 05:46:06.285000 audit[5227]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffe937a0d0 a2=94 a3=2 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.285000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:06.285000 audit: BPF prog-id=189 op=UNLOAD Jan 14 05:46:06.285000 audit[5227]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffe937a100 a2=0 a3=7fffe937a200 items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.285000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:06.287000 audit: BPF prog-id=188 op=UNLOAD Jan 14 05:46:06.287000 audit[5227]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=317e8d10 a2=0 a3=8358b4a9c9ac028e items=0 ppid=4984 pid=5227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.287000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 05:46:06.358000 audit: BPF prog-id=190 op=LOAD Jan 14 05:46:06.358000 audit[5250]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2bc91160 a2=98 a3=1999999999999999 items=0 ppid=4984 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.358000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 05:46:06.359000 audit: BPF prog-id=190 op=UNLOAD Jan 14 05:46:06.359000 audit[5250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc2bc91130 a3=0 items=0 ppid=4984 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.359000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 05:46:06.360000 audit: BPF prog-id=191 op=LOAD 
Jan 14 05:46:06.360000 audit[5250]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2bc91040 a2=94 a3=ffff items=0 ppid=4984 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.360000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 05:46:06.360000 audit: BPF prog-id=191 op=UNLOAD Jan 14 05:46:06.360000 audit[5250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc2bc91040 a2=94 a3=ffff items=0 ppid=4984 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.360000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 05:46:06.360000 audit: BPF prog-id=192 op=LOAD Jan 14 05:46:06.360000 audit[5250]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2bc91080 a2=94 a3=7ffc2bc91260 items=0 ppid=4984 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.360000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 05:46:06.360000 audit: BPF prog-id=192 op=UNLOAD Jan 14 05:46:06.360000 audit[5250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc2bc91080 a2=94 a3=7ffc2bc91260 items=0 ppid=4984 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:06.360000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 05:46:06.385879 containerd[1620]: 2026-01-14 05:46:05.288 [INFO][5150] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--46w2k-eth0 csi-node-driver- calico-system 2b560ec8-f090-4614-a1d5-13a4bc0ce8dc 733 0 2026-01-14 05:45:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-46w2k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califac0ac8392c [] [] }} ContainerID="8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" Namespace="calico-system" Pod="csi-node-driver-46w2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--46w2k-" Jan 14 05:46:06.385879 containerd[1620]: 2026-01-14 05:46:05.289 
[INFO][5150] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" Namespace="calico-system" Pod="csi-node-driver-46w2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--46w2k-eth0" Jan 14 05:46:06.385879 containerd[1620]: 2026-01-14 05:46:05.654 [INFO][5217] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" HandleID="k8s-pod-network.8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" Workload="localhost-k8s-csi--node--driver--46w2k-eth0" Jan 14 05:46:06.386731 containerd[1620]: 2026-01-14 05:46:05.656 [INFO][5217] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" HandleID="k8s-pod-network.8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" Workload="localhost-k8s-csi--node--driver--46w2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000118f70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-46w2k", "timestamp":"2026-01-14 05:46:05.654731927 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 05:46:06.386731 containerd[1620]: 2026-01-14 05:46:05.656 [INFO][5217] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 05:46:06.386731 containerd[1620]: 2026-01-14 05:46:05.904 [INFO][5217] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 05:46:06.386731 containerd[1620]: 2026-01-14 05:46:05.904 [INFO][5217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 05:46:06.386731 containerd[1620]: 2026-01-14 05:46:05.951 [INFO][5217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" host="localhost" Jan 14 05:46:06.386731 containerd[1620]: 2026-01-14 05:46:05.981 [INFO][5217] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 05:46:06.386731 containerd[1620]: 2026-01-14 05:46:06.037 [INFO][5217] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 05:46:06.386731 containerd[1620]: 2026-01-14 05:46:06.071 [INFO][5217] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:06.386731 containerd[1620]: 2026-01-14 05:46:06.086 [INFO][5217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:06.386731 containerd[1620]: 2026-01-14 05:46:06.086 [INFO][5217] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" host="localhost" Jan 14 05:46:06.387533 containerd[1620]: 2026-01-14 05:46:06.099 [INFO][5217] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355 Jan 14 05:46:06.387533 containerd[1620]: 2026-01-14 05:46:06.129 [INFO][5217] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" host="localhost" Jan 14 05:46:06.387533 containerd[1620]: 2026-01-14 05:46:06.168 [INFO][5217] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" host="localhost" Jan 14 05:46:06.387533 containerd[1620]: 2026-01-14 05:46:06.169 [INFO][5217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" host="localhost" Jan 14 05:46:06.387533 containerd[1620]: 2026-01-14 05:46:06.169 [INFO][5217] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 05:46:06.387533 containerd[1620]: 2026-01-14 05:46:06.169 [INFO][5217] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" HandleID="k8s-pod-network.8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" Workload="localhost-k8s-csi--node--driver--46w2k-eth0" Jan 14 05:46:06.387642 containerd[1620]: 2026-01-14 05:46:06.228 [INFO][5150] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" Namespace="calico-system" Pod="csi-node-driver-46w2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--46w2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--46w2k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2b560ec8-f090-4614-a1d5-13a4bc0ce8dc", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-46w2k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califac0ac8392c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:06.387893 containerd[1620]: 2026-01-14 05:46:06.228 [INFO][5150] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" Namespace="calico-system" Pod="csi-node-driver-46w2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--46w2k-eth0" Jan 14 05:46:06.387893 containerd[1620]: 2026-01-14 05:46:06.228 [INFO][5150] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califac0ac8392c ContainerID="8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" Namespace="calico-system" Pod="csi-node-driver-46w2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--46w2k-eth0" Jan 14 05:46:06.387893 containerd[1620]: 2026-01-14 05:46:06.269 [INFO][5150] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" Namespace="calico-system" Pod="csi-node-driver-46w2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--46w2k-eth0" Jan 14 05:46:06.388104 containerd[1620]: 2026-01-14 05:46:06.275 [INFO][5150] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" 
Namespace="calico-system" Pod="csi-node-driver-46w2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--46w2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--46w2k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2b560ec8-f090-4614-a1d5-13a4bc0ce8dc", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355", Pod:"csi-node-driver-46w2k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califac0ac8392c", MAC:"92:4a:a7:9b:26:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:06.388634 containerd[1620]: 2026-01-14 05:46:06.343 [INFO][5150] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" Namespace="calico-system" Pod="csi-node-driver-46w2k" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--46w2k-eth0" Jan 14 05:46:06.542493 containerd[1620]: time="2026-01-14T05:46:06.541495634Z" level=info msg="connecting to shim a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a" address="unix:///run/containerd/s/454a19bf8757c6693d734ee33d30d5b02945b5739300329df5a23e3231c8e469" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:46:06.650857 containerd[1620]: time="2026-01-14T05:46:06.650503065Z" level=info msg="connecting to shim 8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355" address="unix:///run/containerd/s/67b8fe5573f72bdab25f3248f1b0147b65705ff0c78f19e491727995307e5473" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:46:06.678591 systemd-networkd[1505]: calib40f1cbbb9e: Link UP Jan 14 05:46:06.680544 systemd-networkd[1505]: calib40f1cbbb9e: Gained carrier Jan 14 05:46:06.860830 containerd[1620]: 2026-01-14 05:46:05.125 [INFO][5148] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--h4bdc-eth0 goldmane-7c778bb748- calico-system 1cbfb118-b594-42d6-be3d-0e1840e8dae4 853 0 2026-01-14 05:44:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-h4bdc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib40f1cbbb9e [] [] }} ContainerID="680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" Namespace="calico-system" Pod="goldmane-7c778bb748-h4bdc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h4bdc-" Jan 14 05:46:06.860830 containerd[1620]: 2026-01-14 05:46:05.126 [INFO][5148] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" Namespace="calico-system" 
Pod="goldmane-7c778bb748-h4bdc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h4bdc-eth0" Jan 14 05:46:06.860830 containerd[1620]: 2026-01-14 05:46:05.680 [INFO][5209] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" HandleID="k8s-pod-network.680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" Workload="localhost-k8s-goldmane--7c778bb748--h4bdc-eth0" Jan 14 05:46:06.861519 containerd[1620]: 2026-01-14 05:46:05.682 [INFO][5209] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" HandleID="k8s-pod-network.680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" Workload="localhost-k8s-goldmane--7c778bb748--h4bdc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038ea10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-h4bdc", "timestamp":"2026-01-14 05:46:05.680851914 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 05:46:06.861519 containerd[1620]: 2026-01-14 05:46:05.682 [INFO][5209] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 05:46:06.861519 containerd[1620]: 2026-01-14 05:46:06.174 [INFO][5209] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 05:46:06.861519 containerd[1620]: 2026-01-14 05:46:06.175 [INFO][5209] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 05:46:06.861519 containerd[1620]: 2026-01-14 05:46:06.274 [INFO][5209] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" host="localhost" Jan 14 05:46:06.861519 containerd[1620]: 2026-01-14 05:46:06.351 [INFO][5209] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 05:46:06.861519 containerd[1620]: 2026-01-14 05:46:06.427 [INFO][5209] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 05:46:06.861519 containerd[1620]: 2026-01-14 05:46:06.460 [INFO][5209] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:06.861519 containerd[1620]: 2026-01-14 05:46:06.486 [INFO][5209] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:06.861519 containerd[1620]: 2026-01-14 05:46:06.486 [INFO][5209] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" host="localhost" Jan 14 05:46:06.861921 containerd[1620]: 2026-01-14 05:46:06.523 [INFO][5209] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4 Jan 14 05:46:06.861921 containerd[1620]: 2026-01-14 05:46:06.547 [INFO][5209] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" host="localhost" Jan 14 05:46:06.861921 containerd[1620]: 2026-01-14 05:46:06.582 [INFO][5209] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" host="localhost" Jan 14 05:46:06.861921 containerd[1620]: 2026-01-14 05:46:06.589 [INFO][5209] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" host="localhost" Jan 14 05:46:06.861921 containerd[1620]: 2026-01-14 05:46:06.610 [INFO][5209] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 05:46:06.861921 containerd[1620]: 2026-01-14 05:46:06.610 [INFO][5209] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" HandleID="k8s-pod-network.680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" Workload="localhost-k8s-goldmane--7c778bb748--h4bdc-eth0" Jan 14 05:46:06.862153 containerd[1620]: 2026-01-14 05:46:06.667 [INFO][5148] cni-plugin/k8s.go 418: Populated endpoint ContainerID="680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" Namespace="calico-system" Pod="goldmane-7c778bb748-h4bdc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h4bdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--h4bdc-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"1cbfb118-b594-42d6-be3d-0e1840e8dae4", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 44, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-h4bdc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib40f1cbbb9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:06.862153 containerd[1620]: 2026-01-14 05:46:06.667 [INFO][5148] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" Namespace="calico-system" Pod="goldmane-7c778bb748-h4bdc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h4bdc-eth0" Jan 14 05:46:06.864078 containerd[1620]: 2026-01-14 05:46:06.667 [INFO][5148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib40f1cbbb9e ContainerID="680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" Namespace="calico-system" Pod="goldmane-7c778bb748-h4bdc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h4bdc-eth0" Jan 14 05:46:06.864078 containerd[1620]: 2026-01-14 05:46:06.681 [INFO][5148] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" Namespace="calico-system" Pod="goldmane-7c778bb748-h4bdc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h4bdc-eth0" Jan 14 05:46:06.864152 containerd[1620]: 2026-01-14 05:46:06.687 [INFO][5148] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" Namespace="calico-system" Pod="goldmane-7c778bb748-h4bdc" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h4bdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--h4bdc-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"1cbfb118-b594-42d6-be3d-0e1840e8dae4", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 44, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4", Pod:"goldmane-7c778bb748-h4bdc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib40f1cbbb9e", MAC:"7e:93:92:a9:19:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:06.864786 containerd[1620]: 2026-01-14 05:46:06.781 [INFO][5148] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" Namespace="calico-system" Pod="goldmane-7c778bb748-h4bdc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h4bdc-eth0" Jan 14 05:46:06.942148 systemd[1]: Started 
cri-containerd-a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a.scope - libcontainer container a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a. Jan 14 05:46:07.025571 systemd[1]: Started cri-containerd-8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355.scope - libcontainer container 8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355. Jan 14 05:46:07.126680 kubelet[2825]: E0114 05:46:07.122706 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:07.131803 containerd[1620]: time="2026-01-14T05:46:07.131549670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5h66t,Uid:e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156,Namespace:kube-system,Attempt:0,}" Jan 14 05:46:07.157388 containerd[1620]: time="2026-01-14T05:46:07.156790030Z" level=info msg="connecting to shim 680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4" address="unix:///run/containerd/s/508e88236407a95d97276a9cc5ec8887df4ed9e1236e679e4d492d8e2807e3f8" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:46:07.165000 audit: BPF prog-id=193 op=LOAD Jan 14 05:46:07.167000 audit: BPF prog-id=194 op=LOAD Jan 14 05:46:07.167000 audit[5292]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5266 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135613863353939616566633439343739613139626663313163326461 Jan 14 05:46:07.169000 audit: BPF prog-id=194 op=UNLOAD Jan 14 
05:46:07.169000 audit[5292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5266 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135613863353939616566633439343739613139626663313163326461 Jan 14 05:46:07.169000 audit: BPF prog-id=195 op=LOAD Jan 14 05:46:07.169000 audit[5292]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5266 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135613863353939616566633439343739613139626663313163326461 Jan 14 05:46:07.171000 audit: BPF prog-id=196 op=LOAD Jan 14 05:46:07.171000 audit[5292]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5266 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135613863353939616566633439343739613139626663313163326461 Jan 14 
05:46:07.171000 audit: BPF prog-id=196 op=UNLOAD Jan 14 05:46:07.171000 audit[5292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5266 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135613863353939616566633439343739613139626663313163326461 Jan 14 05:46:07.172000 audit: BPF prog-id=195 op=UNLOAD Jan 14 05:46:07.172000 audit[5292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5266 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135613863353939616566633439343739613139626663313163326461 Jan 14 05:46:07.172000 audit: BPF prog-id=197 op=LOAD Jan 14 05:46:07.172000 audit[5292]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5266 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.172000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135613863353939616566633439343739613139626663313163326461 Jan 14 05:46:07.202677 systemd-resolved[1291]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 05:46:07.225705 systemd-networkd[1505]: calibc0bbaf51ae: Gained IPv6LL Jan 14 05:46:07.353664 systemd-networkd[1505]: califac0ac8392c: Gained IPv6LL Jan 14 05:46:07.386000 audit: BPF prog-id=198 op=LOAD Jan 14 05:46:07.399572 systemd-resolved[1291]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 05:46:07.393000 audit: BPF prog-id=199 op=LOAD Jan 14 05:46:07.393000 audit[5313]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5288 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616339313039663431343333323633316235663763633635336531 Jan 14 05:46:07.393000 audit: BPF prog-id=199 op=UNLOAD Jan 14 05:46:07.393000 audit[5313]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5288 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.393000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616339313039663431343333323633316235663763633635336531 Jan 14 05:46:07.393000 audit: BPF prog-id=200 op=LOAD Jan 14 05:46:07.393000 audit[5313]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5288 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616339313039663431343333323633316235663763633635336531 Jan 14 05:46:07.394000 audit: BPF prog-id=201 op=LOAD Jan 14 05:46:07.394000 audit[5313]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5288 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616339313039663431343333323633316235663763633635336531 Jan 14 05:46:07.394000 audit: BPF prog-id=201 op=UNLOAD Jan 14 05:46:07.394000 audit[5313]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5288 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 05:46:07.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616339313039663431343333323633316235663763633635336531 Jan 14 05:46:07.394000 audit: BPF prog-id=200 op=UNLOAD Jan 14 05:46:07.394000 audit[5313]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5288 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616339313039663431343333323633316235663763633635336531 Jan 14 05:46:07.394000 audit: BPF prog-id=202 op=LOAD Jan 14 05:46:07.394000 audit[5313]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5288 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862616339313039663431343333323633316235663763633635336531 Jan 14 05:46:07.562548 systemd-networkd[1505]: vxlan.calico: Link UP Jan 14 05:46:07.562559 systemd-networkd[1505]: vxlan.calico: Gained carrier Jan 14 05:46:07.651645 systemd[1]: Started cri-containerd-680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4.scope - libcontainer container 
680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4. Jan 14 05:46:07.846494 containerd[1620]: time="2026-01-14T05:46:07.842659500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85449f874f-xn2d4,Uid:64a69192-713c-418d-907c-75ea3917f0cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"a5a8c599aefc49479a19bfc11c2da2be25278dec4f04ef8c4ba346a703c8020a\"" Jan 14 05:46:07.846494 containerd[1620]: time="2026-01-14T05:46:07.842898787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46w2k,Uid:2b560ec8-f090-4614-a1d5-13a4bc0ce8dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"8bac9109f414332631b5f7cc653e16d5a5fda0346963dd8d7e7af20e6a1fe355\"" Jan 14 05:46:07.865346 containerd[1620]: time="2026-01-14T05:46:07.862846907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 05:46:07.876000 audit: BPF prog-id=203 op=LOAD Jan 14 05:46:07.885000 audit: BPF prog-id=204 op=LOAD Jan 14 05:46:07.885000 audit[5392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5370 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303135316432653730643165356437633862373237646364373463 Jan 14 05:46:07.885000 audit: BPF prog-id=204 op=UNLOAD Jan 14 05:46:07.885000 audit[5392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5370 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
05:46:07.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303135316432653730643165356437633862373237646364373463 Jan 14 05:46:07.889000 audit: BPF prog-id=205 op=LOAD Jan 14 05:46:07.889000 audit[5392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5370 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303135316432653730643165356437633862373237646364373463 Jan 14 05:46:07.889000 audit: BPF prog-id=206 op=LOAD Jan 14 05:46:07.889000 audit[5392]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5370 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303135316432653730643165356437633862373237646364373463 Jan 14 05:46:07.889000 audit: BPF prog-id=206 op=UNLOAD Jan 14 05:46:07.889000 audit[5392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5370 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303135316432653730643165356437633862373237646364373463 Jan 14 05:46:07.889000 audit: BPF prog-id=205 op=UNLOAD Jan 14 05:46:07.889000 audit[5392]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5370 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303135316432653730643165356437633862373237646364373463 Jan 14 05:46:07.889000 audit: BPF prog-id=207 op=LOAD Jan 14 05:46:07.889000 audit[5392]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5370 pid=5392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:07.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638303135316432653730643165356437633862373237646364373463 Jan 14 05:46:07.927824 systemd-resolved[1291]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 05:46:07.932646 systemd-networkd[1505]: calib40f1cbbb9e: Gained IPv6LL Jan 14 05:46:07.977393 containerd[1620]: 
time="2026-01-14T05:46:07.975496324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:07.997647 containerd[1620]: time="2026-01-14T05:46:07.997590882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 05:46:08.000560 containerd[1620]: time="2026-01-14T05:46:07.997849453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:08.013921 kubelet[2825]: E0114 05:46:08.013680 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 05:46:08.013921 kubelet[2825]: E0114 05:46:08.013856 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 05:46:08.014597 kubelet[2825]: E0114 05:46:08.014455 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:08.020447 containerd[1620]: time="2026-01-14T05:46:08.018761578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 05:46:08.130944 containerd[1620]: time="2026-01-14T05:46:08.114641111Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:08.141529 containerd[1620]: time="2026-01-14T05:46:08.132613981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-bpw89,Uid:5832da08-4ce6-484b-b421-5f73ad1ce8d2,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:46:08.141529 containerd[1620]: time="2026-01-14T05:46:08.135676784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 05:46:08.141529 containerd[1620]: time="2026-01-14T05:46:08.135822717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:08.151000 audit: BPF prog-id=208 op=LOAD Jan 14 05:46:08.151000 audit[5446]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe59fbbcb0 a2=98 a3=0 items=0 ppid=4984 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.151000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 05:46:08.152000 audit: BPF prog-id=208 op=UNLOAD Jan 14 05:46:08.152000 audit[5446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe59fbbc80 a3=0 items=0 ppid=4984 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.152000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 05:46:08.155883 kubelet[2825]: E0114 05:46:08.149719 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 05:46:08.155883 kubelet[2825]: E0114 05:46:08.149791 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 05:46:08.152000 audit: BPF prog-id=209 op=LOAD Jan 14 05:46:08.152000 audit[5446]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe59fbbac0 a2=94 a3=54428f items=0 ppid=4984 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.152000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 05:46:08.157000 audit: BPF prog-id=209 op=UNLOAD Jan 14 05:46:08.157000 audit[5446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe59fbbac0 a2=94 a3=54428f items=0 ppid=4984 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 05:46:08.157000 audit: BPF prog-id=210 op=LOAD Jan 14 05:46:08.157000 audit[5446]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe59fbbaf0 a2=94 a3=2 items=0 ppid=4984 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 05:46:08.157000 audit: BPF prog-id=210 op=UNLOAD Jan 14 05:46:08.157000 audit[5446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe59fbbaf0 a2=0 a3=2 items=0 ppid=4984 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 05:46:08.157000 audit: BPF prog-id=211 op=LOAD Jan 14 05:46:08.157000 audit[5446]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe59fbb8a0 a2=94 a3=4 items=0 ppid=4984 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.157000 
audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 05:46:08.157000 audit: BPF prog-id=211 op=UNLOAD Jan 14 05:46:08.157000 audit[5446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe59fbb8a0 a2=94 a3=4 items=0 ppid=4984 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 05:46:08.157000 audit: BPF prog-id=212 op=LOAD Jan 14 05:46:08.157000 audit[5446]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe59fbb9a0 a2=94 a3=7ffe59fbbb20 items=0 ppid=4984 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 05:46:08.158000 audit: BPF prog-id=212 op=UNLOAD Jan 14 05:46:08.158000 audit[5446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe59fbb9a0 a2=0 a3=7ffe59fbbb20 items=0 ppid=4984 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.158000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 05:46:08.168000 audit: BPF prog-id=213 op=LOAD Jan 14 05:46:08.168000 audit[5446]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe59fbb0d0 a2=94 a3=2 items=0 ppid=4984 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.168000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 05:46:08.168000 audit: BPF prog-id=213 op=UNLOAD Jan 14 05:46:08.168000 audit[5446]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe59fbb0d0 a2=0 a3=2 items=0 ppid=4984 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.168000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 05:46:08.171710 kubelet[2825]: E0114 05:46:08.168862 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-85449f874f-xn2d4_calico-system(64a69192-713c-418d-907c-75ea3917f0cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
logger="UnhandledError" Jan 14 05:46:08.171710 kubelet[2825]: E0114 05:46:08.168905 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:46:08.171823 containerd[1620]: time="2026-01-14T05:46:08.167854881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 05:46:08.169000 audit: BPF prog-id=214 op=LOAD Jan 14 05:46:08.169000 audit[5446]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe59fbb1d0 a2=94 a3=30 items=0 ppid=4984 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.169000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 05:46:08.437000 audit: BPF prog-id=215 op=LOAD Jan 14 05:46:08.437000 audit[5450]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8b1e97b0 a2=98 a3=0 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.437000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:08.437000 audit: BPF 
prog-id=215 op=UNLOAD Jan 14 05:46:08.437000 audit[5450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe8b1e9780 a3=0 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.437000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:08.437000 audit: BPF prog-id=216 op=LOAD Jan 14 05:46:08.437000 audit[5450]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe8b1e95a0 a2=94 a3=54428f items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.437000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:08.438000 audit: BPF prog-id=216 op=UNLOAD Jan 14 05:46:08.438000 audit[5450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe8b1e95a0 a2=94 a3=54428f items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.438000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:08.438000 audit: BPF prog-id=217 op=LOAD Jan 14 05:46:08.438000 audit[5450]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe8b1e95d0 a2=94 a3=2 items=0 
ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.438000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:08.438000 audit: BPF prog-id=217 op=UNLOAD Jan 14 05:46:08.438000 audit[5450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe8b1e95d0 a2=0 a3=2 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:08.438000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:08.510849 systemd-networkd[1505]: caliaa4141e6277: Link UP Jan 14 05:46:08.517411 containerd[1620]: time="2026-01-14T05:46:08.514535583Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:08.525863 containerd[1620]: time="2026-01-14T05:46:08.523718078Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 05:46:08.525863 containerd[1620]: time="2026-01-14T05:46:08.523888628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:08.528144 systemd-networkd[1505]: caliaa4141e6277: Gained carrier Jan 14 05:46:08.537768 kubelet[2825]: E0114 05:46:08.529514 
2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 05:46:08.537768 kubelet[2825]: E0114 05:46:08.529556 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 05:46:08.537768 kubelet[2825]: E0114 05:46:08.529638 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:08.537768 kubelet[2825]: E0114 05:46:08.529672 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 
14 05:46:08.600611 containerd[1620]: time="2026-01-14T05:46:08.600571695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h4bdc,Uid:1cbfb118-b594-42d6-be3d-0e1840e8dae4,Namespace:calico-system,Attempt:0,} returns sandbox id \"680151d2e70d1e5d7c8b727dcd74cbdddf1ee737882ac4d43b4e9fcf72aed5e4\"" Jan 14 05:46:08.626740 containerd[1620]: time="2026-01-14T05:46:08.626620335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 05:46:08.634485 systemd-networkd[1505]: vxlan.calico: Gained IPv6LL Jan 14 05:46:08.669702 containerd[1620]: 2026-01-14 05:46:07.149 [INFO][5327] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c8b67549f--nrl4m-eth0 calico-apiserver-c8b67549f- calico-apiserver f8e3b291-7413-4398-b3ac-57e03796db9f 848 0 2026-01-14 05:44:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c8b67549f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c8b67549f-nrl4m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaa4141e6277 [] [] }} ContainerID="bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-nrl4m" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--nrl4m-" Jan 14 05:46:08.669702 containerd[1620]: 2026-01-14 05:46:07.150 [INFO][5327] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-nrl4m" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--nrl4m-eth0" Jan 14 05:46:08.669702 containerd[1620]: 2026-01-14 05:46:07.859 [INFO][5381] ipam/ipam_plugin.go 
227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" HandleID="k8s-pod-network.bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" Workload="localhost-k8s-calico--apiserver--c8b67549f--nrl4m-eth0" Jan 14 05:46:08.670641 containerd[1620]: 2026-01-14 05:46:07.866 [INFO][5381] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" HandleID="k8s-pod-network.bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" Workload="localhost-k8s-calico--apiserver--c8b67549f--nrl4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f070), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c8b67549f-nrl4m", "timestamp":"2026-01-14 05:46:07.859457763 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 05:46:08.670641 containerd[1620]: 2026-01-14 05:46:07.869 [INFO][5381] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 05:46:08.670641 containerd[1620]: 2026-01-14 05:46:07.869 [INFO][5381] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 05:46:08.670641 containerd[1620]: 2026-01-14 05:46:07.869 [INFO][5381] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 05:46:08.670641 containerd[1620]: 2026-01-14 05:46:07.950 [INFO][5381] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" host="localhost" Jan 14 05:46:08.670641 containerd[1620]: 2026-01-14 05:46:07.983 [INFO][5381] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 05:46:08.670641 containerd[1620]: 2026-01-14 05:46:08.079 [INFO][5381] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 05:46:08.670641 containerd[1620]: 2026-01-14 05:46:08.092 [INFO][5381] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:08.670641 containerd[1620]: 2026-01-14 05:46:08.123 [INFO][5381] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:08.670641 containerd[1620]: 2026-01-14 05:46:08.125 [INFO][5381] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" host="localhost" Jan 14 05:46:08.671671 containerd[1620]: 2026-01-14 05:46:08.152 [INFO][5381] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a Jan 14 05:46:08.671671 containerd[1620]: 2026-01-14 05:46:08.255 [INFO][5381] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" host="localhost" Jan 14 05:46:08.671671 containerd[1620]: 2026-01-14 05:46:08.393 [INFO][5381] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" host="localhost" Jan 14 05:46:08.671671 containerd[1620]: 2026-01-14 05:46:08.401 [INFO][5381] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" host="localhost" Jan 14 05:46:08.671671 containerd[1620]: 2026-01-14 05:46:08.406 [INFO][5381] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 05:46:08.671671 containerd[1620]: 2026-01-14 05:46:08.411 [INFO][5381] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" HandleID="k8s-pod-network.bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" Workload="localhost-k8s-calico--apiserver--c8b67549f--nrl4m-eth0" Jan 14 05:46:08.671861 containerd[1620]: 2026-01-14 05:46:08.473 [INFO][5327] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-nrl4m" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--nrl4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c8b67549f--nrl4m-eth0", GenerateName:"calico-apiserver-c8b67549f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8e3b291-7413-4398-b3ac-57e03796db9f", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c8b67549f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c8b67549f-nrl4m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa4141e6277", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:08.672865 containerd[1620]: 2026-01-14 05:46:08.481 [INFO][5327] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-nrl4m" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--nrl4m-eth0" Jan 14 05:46:08.672865 containerd[1620]: 2026-01-14 05:46:08.481 [INFO][5327] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa4141e6277 ContainerID="bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-nrl4m" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--nrl4m-eth0" Jan 14 05:46:08.672865 containerd[1620]: 2026-01-14 05:46:08.540 [INFO][5327] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-nrl4m" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--nrl4m-eth0" Jan 14 05:46:08.672973 containerd[1620]: 2026-01-14 05:46:08.551 [INFO][5327] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-nrl4m" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--nrl4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c8b67549f--nrl4m-eth0", GenerateName:"calico-apiserver-c8b67549f-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8e3b291-7413-4398-b3ac-57e03796db9f", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c8b67549f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a", Pod:"calico-apiserver-c8b67549f-nrl4m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa4141e6277", MAC:"72:40:07:7a:30:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:08.673620 containerd[1620]: 2026-01-14 05:46:08.626 [INFO][5327] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-nrl4m" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--nrl4m-eth0" Jan 14 05:46:08.790771 containerd[1620]: time="2026-01-14T05:46:08.789597627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:08.865844 containerd[1620]: time="2026-01-14T05:46:08.865781425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 05:46:08.867764 containerd[1620]: time="2026-01-14T05:46:08.867715225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:08.868661 kubelet[2825]: E0114 05:46:08.868629 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 05:46:08.869568 kubelet[2825]: E0114 05:46:08.868800 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 05:46:08.872560 kubelet[2825]: E0114 05:46:08.872527 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h4bdc_calico-system(1cbfb118-b594-42d6-be3d-0e1840e8dae4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:08.872674 kubelet[2825]: E0114 05:46:08.872649 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:46:08.983443 kubelet[2825]: E0114 05:46:08.982442 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:46:09.010469 kubelet[2825]: E0114 05:46:09.009592 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:46:09.085794 containerd[1620]: time="2026-01-14T05:46:09.082449397Z" level=info msg="connecting to shim bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a" address="unix:///run/containerd/s/41d60dee01d0c408b5efe059aac9744d37c2e8cdffef3a2d0b520b5f21c949c2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:46:09.117922 containerd[1620]: time="2026-01-14T05:46:09.116813953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d668c555c-qwjx8,Uid:e1f153ba-430a-43e5-84a9-e29936603f76,Namespace:calico-apiserver,Attempt:0,}" Jan 14 05:46:09.120732 kubelet[2825]: E0114 05:46:09.120707 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:09.130649 containerd[1620]: time="2026-01-14T05:46:09.130470502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w9fc7,Uid:cefaac90-18ac-4910-8420-3803dde0c763,Namespace:kube-system,Attempt:0,}" Jan 14 05:46:09.329411 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 14 05:46:09.329560 kernel: audit: type=1334 audit(1768369569.295:649): prog-id=218 op=LOAD Jan 14 05:46:09.295000 audit: BPF prog-id=218 op=LOAD Jan 14 05:46:09.295000 audit[5450]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe8b1e9490 a2=94 a3=1 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.400576 kernel: audit: type=1300 audit(1768369569.295:649): arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe8b1e9490 a2=94 a3=1 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.295000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:09.461710 kernel: audit: type=1327 audit(1768369569.295:649): proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:09.461798 kernel: audit: type=1334 audit(1768369569.332:650): prog-id=218 op=UNLOAD Jan 14 05:46:09.332000 audit: BPF prog-id=218 op=UNLOAD Jan 14 05:46:09.513944 kernel: audit: type=1300 audit(1768369569.332:650): arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe8b1e9490 a2=94 a3=1 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.332000 audit[5450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe8b1e9490 a2=94 a3=1 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.332000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:09.561446 kernel: audit: type=1327 audit(1768369569.332:650): proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:09.408000 audit: BPF prog-id=219 
op=LOAD Jan 14 05:46:09.626698 kernel: audit: type=1334 audit(1768369569.408:651): prog-id=219 op=LOAD Jan 14 05:46:09.626824 kernel: audit: type=1300 audit(1768369569.408:651): arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe8b1e9480 a2=94 a3=4 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.408000 audit[5450]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe8b1e9480 a2=94 a3=4 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:09.672627 kernel: audit: type=1327 audit(1768369569.408:651): proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:09.408000 audit: BPF prog-id=219 op=UNLOAD Jan 14 05:46:09.691683 kernel: audit: type=1334 audit(1768369569.408:652): prog-id=219 op=UNLOAD Jan 14 05:46:09.408000 audit[5450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe8b1e9480 a2=0 a3=4 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 
14 05:46:09.408000 audit: BPF prog-id=220 op=LOAD Jan 14 05:46:09.408000 audit[5450]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe8b1e92e0 a2=94 a3=5 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.408000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:09.409000 audit: BPF prog-id=220 op=UNLOAD Jan 14 05:46:09.409000 audit[5450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe8b1e92e0 a2=0 a3=5 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.409000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:09.409000 audit: BPF prog-id=221 op=LOAD Jan 14 05:46:09.409000 audit[5450]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe8b1e9500 a2=94 a3=6 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.409000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:09.409000 audit: BPF prog-id=221 op=UNLOAD Jan 14 05:46:09.409000 audit[5450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe8b1e9500 
a2=0 a3=6 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.409000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:09.409000 audit: BPF prog-id=222 op=LOAD Jan 14 05:46:09.409000 audit[5450]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe8b1e8cb0 a2=94 a3=88 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.409000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:09.409000 audit: BPF prog-id=223 op=LOAD Jan 14 05:46:09.409000 audit[5450]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe8b1e8b30 a2=94 a3=2 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.409000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:09.410000 audit: BPF prog-id=223 op=UNLOAD Jan 14 05:46:09.410000 audit[5450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe8b1e8b60 a2=0 a3=7ffe8b1e8c60 items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.410000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:09.410000 audit: BPF prog-id=222 op=UNLOAD Jan 14 05:46:09.410000 audit[5450]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1b1c0d10 a2=0 a3=b5bce9c4a4314d7a items=0 ppid=4984 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.410000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 05:46:09.743000 audit: BPF prog-id=214 op=UNLOAD Jan 14 05:46:09.743000 audit[4984]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0007acb80 a2=0 a3=0 items=0 ppid=4968 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:09.743000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 05:46:09.760875 systemd[1]: Started cri-containerd-bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a.scope - libcontainer container bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a. 
Jan 14 05:46:09.980662 systemd-networkd[1505]: caliaa4141e6277: Gained IPv6LL Jan 14 05:46:10.062846 kubelet[2825]: E0114 05:46:10.057793 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:46:10.062846 kubelet[2825]: E0114 05:46:10.057899 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:46:10.062846 kubelet[2825]: E0114 05:46:10.057965 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:46:10.098000 audit: BPF prog-id=224 op=LOAD Jan 14 05:46:10.103000 audit: BPF prog-id=225 op=LOAD Jan 14 05:46:10.103000 audit[5511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5497 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:10.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263653034356438376565343139643662336231386436373238373231 Jan 14 05:46:10.103000 audit: BPF prog-id=225 op=UNLOAD Jan 14 05:46:10.103000 audit[5511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5497 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:10.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263653034356438376565343139643662336231386436373238373231 Jan 14 05:46:10.107000 audit: BPF prog-id=226 op=LOAD Jan 14 05:46:10.107000 audit[5511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5497 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:10.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263653034356438376565343139643662336231386436373238373231 Jan 14 05:46:10.107000 audit: BPF prog-id=227 op=LOAD Jan 14 05:46:10.107000 audit[5511]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5497 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:10.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263653034356438376565343139643662336231386436373238373231 Jan 14 05:46:10.107000 audit: BPF prog-id=227 op=UNLOAD Jan 14 05:46:10.107000 audit[5511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5497 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:10.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263653034356438376565343139643662336231386436373238373231 Jan 14 05:46:10.107000 audit: BPF prog-id=226 op=UNLOAD Jan 14 05:46:10.107000 audit[5511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5497 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:10.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263653034356438376565343139643662336231386436373238373231 Jan 14 05:46:10.107000 audit: BPF prog-id=228 op=LOAD Jan 14 05:46:10.107000 audit[5511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5497 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:10.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263653034356438376565343139643662336231386436373238373231 Jan 14 05:46:10.118636 systemd-resolved[1291]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 05:46:10.510543 containerd[1620]: time="2026-01-14T05:46:10.509635424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-nrl4m,Uid:f8e3b291-7413-4398-b3ac-57e03796db9f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bce045d87ee419d6b3b18d6728721aa17a4a355e477bab6ba5da6bcc13a5901a\"" Jan 14 05:46:10.538972 containerd[1620]: time="2026-01-14T05:46:10.536985860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 05:46:10.680421 containerd[1620]: time="2026-01-14T05:46:10.678683348Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:10.695726 containerd[1620]: time="2026-01-14T05:46:10.694461717Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 05:46:10.695726 containerd[1620]: time="2026-01-14T05:46:10.694573937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:10.699129 kubelet[2825]: E0114 05:46:10.698467 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:46:10.699129 kubelet[2825]: E0114 05:46:10.698645 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:46:10.699129 kubelet[2825]: E0114 05:46:10.698730 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c8b67549f-nrl4m_calico-apiserver(f8e3b291-7413-4398-b3ac-57e03796db9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:10.699129 kubelet[2825]: E0114 05:46:10.698777 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:46:10.873605 systemd-networkd[1505]: cali17b081d38a7: Link UP Jan 14 05:46:10.890637 systemd-networkd[1505]: cali17b081d38a7: Gained carrier Jan 14 05:46:10.991000 audit[5628]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=5628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:46:10.991000 audit[5628]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc9ea02640 a2=0 a3=7ffc9ea0262c items=0 ppid=2986 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:10.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:11.041000 audit[5628]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=5628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:46:11.041000 audit[5628]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc9ea02640 a2=0 a3=0 items=0 ppid=2986 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:11.041000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:11.050733 containerd[1620]: 2026-01-14 05:46:09.583 [INFO][5464] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c8b67549f--bpw89-eth0 calico-apiserver-c8b67549f- calico-apiserver 5832da08-4ce6-484b-b421-5f73ad1ce8d2 857 0 2026-01-14 05:44:54 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c8b67549f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c8b67549f-bpw89 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali17b081d38a7 [] [] }} ContainerID="af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-bpw89" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--bpw89-" Jan 14 05:46:11.050733 containerd[1620]: 2026-01-14 05:46:09.591 [INFO][5464] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-bpw89" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--bpw89-eth0" Jan 14 05:46:11.050733 containerd[1620]: 2026-01-14 05:46:10.091 [INFO][5558] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" HandleID="k8s-pod-network.af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" Workload="localhost-k8s-calico--apiserver--c8b67549f--bpw89-eth0" Jan 14 05:46:11.051778 containerd[1620]: 2026-01-14 05:46:10.121 [INFO][5558] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" HandleID="k8s-pod-network.af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" Workload="localhost-k8s-calico--apiserver--c8b67549f--bpw89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c8b67549f-bpw89", "timestamp":"2026-01-14 05:46:10.091765685 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 05:46:11.051778 containerd[1620]: 2026-01-14 05:46:10.124 [INFO][5558] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 05:46:11.051778 containerd[1620]: 2026-01-14 05:46:10.124 [INFO][5558] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 05:46:11.051778 containerd[1620]: 2026-01-14 05:46:10.124 [INFO][5558] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 05:46:11.051778 containerd[1620]: 2026-01-14 05:46:10.431 [INFO][5558] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" host="localhost" Jan 14 05:46:11.051778 containerd[1620]: 2026-01-14 05:46:10.536 [INFO][5558] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 05:46:11.051778 containerd[1620]: 2026-01-14 05:46:10.601 [INFO][5558] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 05:46:11.051778 containerd[1620]: 2026-01-14 05:46:10.628 [INFO][5558] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:11.051778 containerd[1620]: 2026-01-14 05:46:10.645 [INFO][5558] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:11.051778 containerd[1620]: 2026-01-14 05:46:10.646 [INFO][5558] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" host="localhost" Jan 14 05:46:11.052707 containerd[1620]: 2026-01-14 05:46:10.664 [INFO][5558] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74 Jan 14 05:46:11.052707 containerd[1620]: 2026-01-14 05:46:10.709 [INFO][5558] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" host="localhost" Jan 14 05:46:11.052707 containerd[1620]: 2026-01-14 05:46:10.756 [INFO][5558] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" host="localhost" Jan 14 05:46:11.052707 containerd[1620]: 2026-01-14 05:46:10.774 [INFO][5558] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" host="localhost" Jan 14 05:46:11.052707 containerd[1620]: 2026-01-14 05:46:10.775 [INFO][5558] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 05:46:11.052707 containerd[1620]: 2026-01-14 05:46:10.780 [INFO][5558] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" HandleID="k8s-pod-network.af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" Workload="localhost-k8s-calico--apiserver--c8b67549f--bpw89-eth0" Jan 14 05:46:11.052895 containerd[1620]: 2026-01-14 05:46:10.820 [INFO][5464] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-bpw89" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--bpw89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c8b67549f--bpw89-eth0", GenerateName:"calico-apiserver-c8b67549f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5832da08-4ce6-484b-b421-5f73ad1ce8d2", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c8b67549f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c8b67549f-bpw89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali17b081d38a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:11.053687 containerd[1620]: 2026-01-14 05:46:10.821 [INFO][5464] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-bpw89" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--bpw89-eth0" Jan 14 05:46:11.053687 containerd[1620]: 2026-01-14 05:46:10.821 [INFO][5464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17b081d38a7 ContainerID="af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-bpw89" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--bpw89-eth0" Jan 14 05:46:11.053687 containerd[1620]: 2026-01-14 05:46:10.892 [INFO][5464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-bpw89" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--bpw89-eth0" Jan 14 05:46:11.053800 containerd[1620]: 2026-01-14 05:46:10.896 [INFO][5464] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-bpw89" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--bpw89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c8b67549f--bpw89-eth0", GenerateName:"calico-apiserver-c8b67549f-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"5832da08-4ce6-484b-b421-5f73ad1ce8d2", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c8b67549f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74", Pod:"calico-apiserver-c8b67549f-bpw89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali17b081d38a7", MAC:"26:8f:f4:a4:4f:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:11.054465 containerd[1620]: 2026-01-14 05:46:10.994 [INFO][5464] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" Namespace="calico-apiserver" Pod="calico-apiserver-c8b67549f-bpw89" WorkloadEndpoint="localhost-k8s-calico--apiserver--c8b67549f--bpw89-eth0" Jan 14 05:46:11.089155 kubelet[2825]: E0114 05:46:11.086705 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:46:11.098846 kubelet[2825]: E0114 05:46:11.091758 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:46:11.104880 kubelet[2825]: E0114 05:46:11.104699 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:11.259000 audit[5646]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=5646 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 05:46:11.259000 audit[5646]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffca0517110 a2=0 a3=7ffca05170fc items=0 ppid=4984 pid=5646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:11.259000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 05:46:11.284018 containerd[1620]: time="2026-01-14T05:46:11.279829698Z" level=info msg="connecting to shim af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74" 
address="unix:///run/containerd/s/c6120c2fea604c4526633e90e8ad6f4addae9bc4aab4763812f6e63c6dc42361" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:46:11.339000 audit[5643]: NETFILTER_CFG table=raw:122 family=2 entries=21 op=nft_register_chain pid=5643 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 05:46:11.339000 audit[5643]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffdf34c1c50 a2=0 a3=7ffdf34c1c3c items=0 ppid=4984 pid=5643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:11.339000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 05:46:11.380849 systemd-networkd[1505]: cali585591ef41c: Link UP Jan 14 05:46:11.383000 audit[5644]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=5644 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 05:46:11.386703 systemd-networkd[1505]: cali585591ef41c: Gained carrier Jan 14 05:46:11.383000 audit[5644]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe7834c890 a2=0 a3=7ffe7834c87c items=0 ppid=4984 pid=5644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:11.383000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 05:46:11.475578 containerd[1620]: 2026-01-14 05:46:09.619 [INFO][5466] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--5h66t-eth0 
coredns-66bc5c9577- kube-system e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156 841 0 2026-01-14 05:44:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-5h66t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali585591ef41c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" Namespace="kube-system" Pod="coredns-66bc5c9577-5h66t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5h66t-" Jan 14 05:46:11.475578 containerd[1620]: 2026-01-14 05:46:09.703 [INFO][5466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" Namespace="kube-system" Pod="coredns-66bc5c9577-5h66t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5h66t-eth0" Jan 14 05:46:11.475578 containerd[1620]: 2026-01-14 05:46:10.172 [INFO][5560] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" HandleID="k8s-pod-network.e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" Workload="localhost-k8s-coredns--66bc5c9577--5h66t-eth0" Jan 14 05:46:11.475996 containerd[1620]: 2026-01-14 05:46:10.184 [INFO][5560] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" HandleID="k8s-pod-network.e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" Workload="localhost-k8s-coredns--66bc5c9577--5h66t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002de570), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-5h66t", "timestamp":"2026-01-14 
05:46:10.172779282 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 05:46:11.475996 containerd[1620]: 2026-01-14 05:46:10.184 [INFO][5560] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 05:46:11.475996 containerd[1620]: 2026-01-14 05:46:10.783 [INFO][5560] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 05:46:11.475996 containerd[1620]: 2026-01-14 05:46:10.783 [INFO][5560] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 05:46:11.475996 containerd[1620]: 2026-01-14 05:46:10.918 [INFO][5560] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" host="localhost" Jan 14 05:46:11.475996 containerd[1620]: 2026-01-14 05:46:11.032 [INFO][5560] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 05:46:11.475996 containerd[1620]: 2026-01-14 05:46:11.088 [INFO][5560] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 05:46:11.475996 containerd[1620]: 2026-01-14 05:46:11.107 [INFO][5560] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:11.475996 containerd[1620]: 2026-01-14 05:46:11.123 [INFO][5560] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:11.475996 containerd[1620]: 2026-01-14 05:46:11.126 [INFO][5560] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" host="localhost" Jan 14 05:46:11.476793 containerd[1620]: 2026-01-14 05:46:11.147 [INFO][5560] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67 Jan 14 05:46:11.476793 containerd[1620]: 2026-01-14 05:46:11.190 [INFO][5560] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" host="localhost" Jan 14 05:46:11.476793 containerd[1620]: 2026-01-14 05:46:11.254 [INFO][5560] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" host="localhost" Jan 14 05:46:11.476793 containerd[1620]: 2026-01-14 05:46:11.254 [INFO][5560] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" host="localhost" Jan 14 05:46:11.476793 containerd[1620]: 2026-01-14 05:46:11.254 [INFO][5560] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 05:46:11.476793 containerd[1620]: 2026-01-14 05:46:11.255 [INFO][5560] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" HandleID="k8s-pod-network.e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" Workload="localhost-k8s-coredns--66bc5c9577--5h66t-eth0" Jan 14 05:46:11.476906 containerd[1620]: 2026-01-14 05:46:11.286 [INFO][5466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" Namespace="kube-system" Pod="coredns-66bc5c9577-5h66t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5h66t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--5h66t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 44, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-5h66t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali585591ef41c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:11.476906 containerd[1620]: 2026-01-14 05:46:11.286 [INFO][5466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" Namespace="kube-system" Pod="coredns-66bc5c9577-5h66t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5h66t-eth0" Jan 14 05:46:11.476906 containerd[1620]: 2026-01-14 05:46:11.286 [INFO][5466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali585591ef41c ContainerID="e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" Namespace="kube-system" Pod="coredns-66bc5c9577-5h66t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5h66t-eth0" Jan 14 05:46:11.476906 containerd[1620]: 2026-01-14 05:46:11.399 [INFO][5466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" Namespace="kube-system" Pod="coredns-66bc5c9577-5h66t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5h66t-eth0" Jan 14 05:46:11.476906 containerd[1620]: 2026-01-14 05:46:11.407 [INFO][5466] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" Namespace="kube-system" Pod="coredns-66bc5c9577-5h66t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5h66t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--5h66t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 44, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67", Pod:"coredns-66bc5c9577-5h66t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali585591ef41c", MAC:"2e:f2:75:a7:1b:bd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:11.476906 containerd[1620]: 2026-01-14 05:46:11.461 [INFO][5466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" Namespace="kube-system" Pod="coredns-66bc5c9577-5h66t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5h66t-eth0" Jan 14 05:46:11.665606 systemd[1]: Started cri-containerd-af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74.scope - libcontainer container af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74. Jan 14 05:46:11.715642 containerd[1620]: time="2026-01-14T05:46:11.715595358Z" level=info msg="connecting to shim e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67" address="unix:///run/containerd/s/4ab6035a324a0371e7271bd6d1878dfefc85ba45a0be96a67a26ea9c70a571ed" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:46:11.723846 systemd-networkd[1505]: cali552473f7135: Link UP Jan 14 05:46:11.725689 systemd-networkd[1505]: cali552473f7135: Gained carrier Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:10.469 [INFO][5531] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d668c555c--qwjx8-eth0 calico-apiserver-7d668c555c- calico-apiserver e1f153ba-430a-43e5-84a9-e29936603f76 851 0 2026-01-14 05:44:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d668c555c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d668c555c-qwjx8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali552473f7135 [] [] }} ContainerID="c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" Namespace="calico-apiserver" Pod="calico-apiserver-7d668c555c-qwjx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d668c555c--qwjx8-" Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:10.470 [INFO][5531] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" Namespace="calico-apiserver" Pod="calico-apiserver-7d668c555c-qwjx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d668c555c--qwjx8-eth0" Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:10.926 [INFO][5620] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" HandleID="k8s-pod-network.c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" Workload="localhost-k8s-calico--apiserver--7d668c555c--qwjx8-eth0" Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:10.927 [INFO][5620] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" HandleID="k8s-pod-network.c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" Workload="localhost-k8s-calico--apiserver--7d668c555c--qwjx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bf720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d668c555c-qwjx8", "timestamp":"2026-01-14 05:46:10.926516032 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:10.929 [INFO][5620] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:11.272 [INFO][5620] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:11.272 [INFO][5620] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:11.349 [INFO][5620] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" host="localhost" Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:11.454 [INFO][5620] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:11.503 [INFO][5620] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:11.539 [INFO][5620] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:11.565 [INFO][5620] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:11.565 [INFO][5620] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" host="localhost" Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:11.588 [INFO][5620] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:11.622 [INFO][5620] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" host="localhost" Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:11.672 [INFO][5620] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" host="localhost" Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:11.672 [INFO][5620] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" host="localhost" Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:11.672 [INFO][5620] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 05:46:11.819656 containerd[1620]: 2026-01-14 05:46:11.672 [INFO][5620] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" HandleID="k8s-pod-network.c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" Workload="localhost-k8s-calico--apiserver--7d668c555c--qwjx8-eth0" Jan 14 05:46:11.823955 containerd[1620]: 2026-01-14 05:46:11.706 [INFO][5531] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" Namespace="calico-apiserver" Pod="calico-apiserver-7d668c555c-qwjx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d668c555c--qwjx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d668c555c--qwjx8-eth0", GenerateName:"calico-apiserver-7d668c555c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1f153ba-430a-43e5-84a9-e29936603f76", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 44, 55, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d668c555c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d668c555c-qwjx8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali552473f7135", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:11.823955 containerd[1620]: 2026-01-14 05:46:11.706 [INFO][5531] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" Namespace="calico-apiserver" Pod="calico-apiserver-7d668c555c-qwjx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d668c555c--qwjx8-eth0" Jan 14 05:46:11.823955 containerd[1620]: 2026-01-14 05:46:11.706 [INFO][5531] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali552473f7135 ContainerID="c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" Namespace="calico-apiserver" Pod="calico-apiserver-7d668c555c-qwjx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d668c555c--qwjx8-eth0" Jan 14 05:46:11.823955 containerd[1620]: 2026-01-14 05:46:11.726 [INFO][5531] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" Namespace="calico-apiserver" Pod="calico-apiserver-7d668c555c-qwjx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d668c555c--qwjx8-eth0" Jan 14 05:46:11.823955 containerd[1620]: 2026-01-14 05:46:11.727 [INFO][5531] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" Namespace="calico-apiserver" Pod="calico-apiserver-7d668c555c-qwjx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d668c555c--qwjx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d668c555c--qwjx8-eth0", GenerateName:"calico-apiserver-7d668c555c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e1f153ba-430a-43e5-84a9-e29936603f76", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 44, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d668c555c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d", Pod:"calico-apiserver-7d668c555c-qwjx8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali552473f7135", MAC:"ea:d1:b9:60:6e:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:11.823955 containerd[1620]: 2026-01-14 05:46:11.767 [INFO][5531] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" Namespace="calico-apiserver" Pod="calico-apiserver-7d668c555c-qwjx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d668c555c--qwjx8-eth0" Jan 14 05:46:11.730000 audit[5674]: NETFILTER_CFG table=filter:124 family=2 entries=192 op=nft_register_chain pid=5674 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 05:46:11.730000 audit[5674]: SYSCALL arch=c000003e syscall=46 success=yes exit=111600 a0=3 a1=7fffc1583a50 a2=0 a3=7fffc1583a3c items=0 ppid=4984 pid=5674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:11.730000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 05:46:11.965698 systemd-networkd[1505]: cali9aa4130b243: Link UP Jan 14 05:46:11.986450 systemd-networkd[1505]: cali9aa4130b243: Gained carrier Jan 14 05:46:12.029399 containerd[1620]: time="2026-01-14T05:46:12.028525140Z" level=info msg="connecting to shim c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d" address="unix:///run/containerd/s/f185c53f47d2892fd60dec29afe4d992aafb8f89165c43114860c0a03b9a41c7" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:46:12.035000 audit: BPF prog-id=229 op=LOAD Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:10.466 [INFO][5527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--w9fc7-eth0 coredns-66bc5c9577- kube-system cefaac90-18ac-4910-8420-3803dde0c763 855 0 2026-01-14 05:44:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-w9fc7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9aa4130b243 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" Namespace="kube-system" Pod="coredns-66bc5c9577-w9fc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w9fc7-" Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:10.469 [INFO][5527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" Namespace="kube-system" Pod="coredns-66bc5c9577-w9fc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w9fc7-eth0" Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:10.918 [INFO][5612] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" HandleID="k8s-pod-network.22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" Workload="localhost-k8s-coredns--66bc5c9577--w9fc7-eth0" Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:10.962 [INFO][5612] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" HandleID="k8s-pod-network.22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" Workload="localhost-k8s-coredns--66bc5c9577--w9fc7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000187b80), Attrs:map[string]string{"namespace":"kube-system", 
"node":"localhost", "pod":"coredns-66bc5c9577-w9fc7", "timestamp":"2026-01-14 05:46:10.918712017 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:10.962 [INFO][5612] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:11.673 [INFO][5612] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:11.673 [INFO][5612] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:11.701 [INFO][5612] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" host="localhost" Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:11.742 [INFO][5612] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:11.798 [INFO][5612] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:11.815 [INFO][5612] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:11.829 [INFO][5612] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:11.830 [INFO][5612] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" host="localhost" Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:11.840 [INFO][5612] 
ipam/ipam.go 1780: Creating new handle: k8s-pod-network.22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:11.873 [INFO][5612] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" host="localhost" Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:11.908 [INFO][5612] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" host="localhost" Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:11.908 [INFO][5612] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" host="localhost" Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:11.909 [INFO][5612] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 05:46:12.037542 containerd[1620]: 2026-01-14 05:46:11.909 [INFO][5612] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" HandleID="k8s-pod-network.22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" Workload="localhost-k8s-coredns--66bc5c9577--w9fc7-eth0" Jan 14 05:46:12.043000 audit: BPF prog-id=230 op=LOAD Jan 14 05:46:12.043000 audit[5673]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5654 pid=5673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166383863623536306461306363343933643336326536353337653431 Jan 14 05:46:12.043000 audit: BPF prog-id=230 op=UNLOAD Jan 14 05:46:12.043000 audit[5673]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5654 pid=5673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166383863623536306461306363343933643336326536353337653431 Jan 14 05:46:12.043000 audit: BPF prog-id=231 op=LOAD Jan 14 05:46:12.043000 audit[5673]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5654 pid=5673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166383863623536306461306363343933643336326536353337653431 Jan 14 05:46:12.043000 audit: BPF prog-id=232 op=LOAD Jan 14 05:46:12.043000 audit[5673]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5654 pid=5673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166383863623536306461306363343933643336326536353337653431 Jan 14 05:46:12.043000 audit: BPF prog-id=232 op=UNLOAD Jan 14 05:46:12.043000 audit[5673]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5654 pid=5673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166383863623536306461306363343933643336326536353337653431 Jan 14 05:46:12.043000 audit: BPF prog-id=231 op=UNLOAD Jan 14 05:46:12.043000 audit[5673]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5654 pid=5673 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166383863623536306461306363343933643336326536353337653431 Jan 14 05:46:12.043000 audit: BPF prog-id=233 op=LOAD Jan 14 05:46:12.043000 audit[5673]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5654 pid=5673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166383863623536306461306363343933643336326536353337653431 Jan 14 05:46:12.049989 containerd[1620]: 2026-01-14 05:46:11.929 [INFO][5527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" Namespace="kube-system" Pod="coredns-66bc5c9577-w9fc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w9fc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--w9fc7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"cefaac90-18ac-4910-8420-3803dde0c763", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 44, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-w9fc7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9aa4130b243", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:12.049989 containerd[1620]: 2026-01-14 05:46:11.929 [INFO][5527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" Namespace="kube-system" Pod="coredns-66bc5c9577-w9fc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w9fc7-eth0" Jan 14 05:46:12.049989 containerd[1620]: 2026-01-14 05:46:11.929 [INFO][5527] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9aa4130b243 ContainerID="22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" Namespace="kube-system" Pod="coredns-66bc5c9577-w9fc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w9fc7-eth0" Jan 14 05:46:12.049989 containerd[1620]: 2026-01-14 05:46:11.988 [INFO][5527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" Namespace="kube-system" Pod="coredns-66bc5c9577-w9fc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w9fc7-eth0" Jan 14 05:46:12.049989 containerd[1620]: 2026-01-14 05:46:11.989 [INFO][5527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" Namespace="kube-system" Pod="coredns-66bc5c9577-w9fc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w9fc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--w9fc7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"cefaac90-18ac-4910-8420-3803dde0c763", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 5, 44, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a", 
Pod:"coredns-66bc5c9577-w9fc7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9aa4130b243", MAC:"02:45:76:d9:2c:4f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 05:46:12.049989 containerd[1620]: 2026-01-14 05:46:12.025 [INFO][5527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" Namespace="kube-system" Pod="coredns-66bc5c9577-w9fc7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--w9fc7-eth0" Jan 14 05:46:12.097718 systemd-resolved[1291]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 05:46:12.108901 kubelet[2825]: E0114 05:46:12.107479 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:46:12.188689 systemd[1]: Started cri-containerd-e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67.scope - libcontainer container e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67. Jan 14 05:46:12.297000 audit: BPF prog-id=234 op=LOAD Jan 14 05:46:12.300000 audit: BPF prog-id=235 op=LOAD Jan 14 05:46:12.300000 audit[5730]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000142238 a2=98 a3=0 items=0 ppid=5703 pid=5730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532306532316366656636646531306238653261303364636165363531 Jan 14 05:46:12.300000 audit: BPF prog-id=235 op=UNLOAD Jan 14 05:46:12.300000 audit[5730]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5703 pid=5730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532306532316366656636646531306238653261303364636165363531 Jan 14 05:46:12.302000 audit: BPF prog-id=236 op=LOAD Jan 14 05:46:12.302000 audit[5730]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=21 a0=5 a1=c000142488 a2=98 a3=0 items=0 ppid=5703 pid=5730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532306532316366656636646531306238653261303364636165363531 Jan 14 05:46:12.302000 audit: BPF prog-id=237 op=LOAD Jan 14 05:46:12.302000 audit[5730]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000142218 a2=98 a3=0 items=0 ppid=5703 pid=5730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532306532316366656636646531306238653261303364636165363531 Jan 14 05:46:12.302000 audit: BPF prog-id=237 op=UNLOAD Jan 14 05:46:12.302000 audit[5730]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5703 pid=5730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532306532316366656636646531306238653261303364636165363531 Jan 14 05:46:12.302000 audit: BPF prog-id=236 op=UNLOAD Jan 14 05:46:12.302000 audit[5730]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5703 pid=5730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532306532316366656636646531306238653261303364636165363531 Jan 14 05:46:12.302000 audit: BPF prog-id=238 op=LOAD Jan 14 05:46:12.302000 audit[5730]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001426e8 a2=98 a3=0 items=0 ppid=5703 pid=5730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532306532316366656636646531306238653261303364636165363531 Jan 14 05:46:12.315699 systemd-resolved[1291]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 05:46:12.361580 systemd[1]: Started cri-containerd-c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d.scope - libcontainer container c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d. 
Jan 14 05:46:12.415000 audit[5789]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:46:12.415000 audit[5789]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd02da3f50 a2=0 a3=7ffd02da3f3c items=0 ppid=2986 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.415000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:12.425519 containerd[1620]: time="2026-01-14T05:46:12.425460616Z" level=info msg="connecting to shim 22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a" address="unix:///run/containerd/s/8893fa2a767d070af28b748d5d8afe32638600aed20839b91aa241b54b4955af" namespace=k8s.io protocol=ttrpc version=3 Jan 14 05:46:12.432000 audit[5789]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=5789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:46:12.432000 audit[5789]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd02da3f50 a2=0 a3=0 items=0 ppid=2986 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.432000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:12.617525 containerd[1620]: time="2026-01-14T05:46:12.612786880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5h66t,Uid:e9fea0eb-bdf0-4dc9-b3cb-7a90544d4156,Namespace:kube-system,Attempt:0,} returns sandbox id \"e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67\"" Jan 14 05:46:12.624920 
kubelet[2825]: E0114 05:46:12.622809 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:12.653882 containerd[1620]: time="2026-01-14T05:46:12.653832909Z" level=info msg="CreateContainer within sandbox \"e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 05:46:12.666014 systemd-networkd[1505]: cali585591ef41c: Gained IPv6LL Jan 14 05:46:12.696913 systemd[1]: Started cri-containerd-22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a.scope - libcontainer container 22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a. Jan 14 05:46:12.736844 containerd[1620]: time="2026-01-14T05:46:12.736652009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c8b67549f-bpw89,Uid:5832da08-4ce6-484b-b421-5f73ad1ce8d2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"af88cb560da0cc493d362e6537e416c58e51519dcecda14d502377527c472d74\"" Jan 14 05:46:12.761971 containerd[1620]: time="2026-01-14T05:46:12.761610741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 05:46:12.845820 containerd[1620]: time="2026-01-14T05:46:12.845778415Z" level=info msg="Container 81d3658a7da26aef3af4994c7d1447678f21f84e4cc0c8a011fb4a80bba75037: CDI devices from CRI Config.CDIDevices: []" Jan 14 05:46:12.845809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4157684228.mount: Deactivated successfully. 
Jan 14 05:46:12.859000 audit: BPF prog-id=239 op=LOAD Jan 14 05:46:12.856996 systemd-networkd[1505]: cali17b081d38a7: Gained IPv6LL Jan 14 05:46:12.864000 audit: BPF prog-id=240 op=LOAD Jan 14 05:46:12.871000 audit: BPF prog-id=241 op=LOAD Jan 14 05:46:12.871000 audit[5768]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000134238 a2=98 a3=0 items=0 ppid=5746 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.871000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331333236363130323934396535333034623038343862623635633432 Jan 14 05:46:12.871000 audit: BPF prog-id=241 op=UNLOAD Jan 14 05:46:12.871000 audit[5768]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5746 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.871000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331333236363130323934396535333034623038343862623635633432 Jan 14 05:46:12.873000 audit: BPF prog-id=242 op=LOAD Jan 14 05:46:12.873000 audit[5828]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000152238 a2=98 a3=0 items=0 ppid=5799 pid=5828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.873000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232663836653332633537386464633237663263336361343737346335 Jan 14 05:46:12.874000 audit: BPF prog-id=242 op=UNLOAD Jan 14 05:46:12.874000 audit[5828]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5799 pid=5828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232663836653332633537386464633237663263336361343737346335 Jan 14 05:46:12.875000 audit: BPF prog-id=243 op=LOAD Jan 14 05:46:12.875000 audit[5828]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000152488 a2=98 a3=0 items=0 ppid=5799 pid=5828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232663836653332633537386464633237663263336361343737346335 Jan 14 05:46:12.876000 audit: BPF prog-id=244 op=LOAD Jan 14 05:46:12.876000 audit[5828]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000152218 a2=98 a3=0 items=0 ppid=5799 pid=5828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 05:46:12.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232663836653332633537386464633237663263336361343737346335 Jan 14 05:46:12.876000 audit: BPF prog-id=244 op=UNLOAD Jan 14 05:46:12.876000 audit: BPF prog-id=245 op=LOAD Jan 14 05:46:12.876000 audit[5768]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000134488 a2=98 a3=0 items=0 ppid=5746 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331333236363130323934396535333034623038343862623635633432 Jan 14 05:46:12.876000 audit: BPF prog-id=246 op=LOAD Jan 14 05:46:12.876000 audit[5768]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000134218 a2=98 a3=0 items=0 ppid=5746 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331333236363130323934396535333034623038343862623635633432 Jan 14 05:46:12.876000 audit: BPF prog-id=246 op=UNLOAD Jan 14 05:46:12.876000 audit[5768]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5746 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331333236363130323934396535333034623038343862623635633432 Jan 14 05:46:12.876000 audit: BPF prog-id=245 op=UNLOAD Jan 14 05:46:12.876000 audit[5828]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5799 pid=5828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.876000 audit[5768]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5746 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331333236363130323934396535333034623038343862623635633432 Jan 14 05:46:12.876000 audit: BPF prog-id=247 op=LOAD Jan 14 05:46:12.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232663836653332633537386464633237663263336361343737346335 Jan 14 05:46:12.882000 audit: BPF prog-id=243 op=UNLOAD Jan 14 05:46:12.882000 audit[5828]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5799 pid=5828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232663836653332633537386464633237663263336361343737346335 Jan 14 05:46:12.841000 audit[5841]: NETFILTER_CFG table=filter:127 family=2 entries=180 op=nft_register_chain pid=5841 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 05:46:12.882000 audit: BPF prog-id=248 op=LOAD Jan 14 05:46:12.882000 audit[5828]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001526e8 a2=98 a3=0 items=0 ppid=5799 pid=5828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232663836653332633537386464633237663263336361343737346335 Jan 14 05:46:12.876000 audit[5768]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001346e8 a2=98 a3=0 items=0 ppid=5746 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331333236363130323934396535333034623038343862623635633432 Jan 14 05:46:12.841000 audit[5841]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=106120 a0=3 a1=7fff82101e90 a2=0 a3=7fff82101e7c items=0 ppid=4984 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:12.841000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 05:46:12.901868 systemd-resolved[1291]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 05:46:12.915888 systemd-resolved[1291]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 05:46:12.955652 containerd[1620]: time="2026-01-14T05:46:12.954921255Z" level=info msg="CreateContainer within sandbox \"e20e21cfef6de10b8e2a03dcae651fb9ba748430ea3cecaab90f684a03732e67\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"81d3658a7da26aef3af4994c7d1447678f21f84e4cc0c8a011fb4a80bba75037\"" Jan 14 05:46:12.972545 containerd[1620]: time="2026-01-14T05:46:12.971676416Z" level=info msg="StartContainer for \"81d3658a7da26aef3af4994c7d1447678f21f84e4cc0c8a011fb4a80bba75037\"" Jan 14 05:46:12.974770 containerd[1620]: time="2026-01-14T05:46:12.974588596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:12.984554 containerd[1620]: time="2026-01-14T05:46:12.984520181Z" level=info msg="connecting to shim 81d3658a7da26aef3af4994c7d1447678f21f84e4cc0c8a011fb4a80bba75037" address="unix:///run/containerd/s/4ab6035a324a0371e7271bd6d1878dfefc85ba45a0be96a67a26ea9c70a571ed" protocol=ttrpc version=3 Jan 14 05:46:12.994684 containerd[1620]: time="2026-01-14T05:46:12.994521296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 05:46:12.995037 containerd[1620]: time="2026-01-14T05:46:12.994756386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:13.010809 kubelet[2825]: E0114 05:46:13.007751 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:46:13.010809 kubelet[2825]: E0114 05:46:13.008499 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:46:13.010809 kubelet[2825]: E0114 05:46:13.008828 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c8b67549f-bpw89_calico-apiserver(5832da08-4ce6-484b-b421-5f73ad1ce8d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:13.010809 kubelet[2825]: E0114 05:46:13.008863 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" 
podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:46:13.113543 systemd-networkd[1505]: cali552473f7135: Gained IPv6LL Jan 14 05:46:13.160989 systemd[1]: Started cri-containerd-81d3658a7da26aef3af4994c7d1447678f21f84e4cc0c8a011fb4a80bba75037.scope - libcontainer container 81d3658a7da26aef3af4994c7d1447678f21f84e4cc0c8a011fb4a80bba75037. Jan 14 05:46:13.166823 kubelet[2825]: E0114 05:46:13.163773 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:46:13.302000 audit: BPF prog-id=249 op=LOAD Jan 14 05:46:13.310000 audit: BPF prog-id=250 op=LOAD Jan 14 05:46:13.310000 audit[5856]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000214238 a2=98 a3=0 items=0 ppid=5703 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831643336353861376461323661656633616634393934633764313434 Jan 14 05:46:13.311000 audit: BPF prog-id=250 op=UNLOAD Jan 14 05:46:13.311000 audit[5856]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5703 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831643336353861376461323661656633616634393934633764313434 Jan 14 05:46:13.312000 audit: BPF prog-id=251 op=LOAD Jan 14 05:46:13.312000 audit[5856]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000214488 a2=98 a3=0 items=0 ppid=5703 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831643336353861376461323661656633616634393934633764313434 Jan 14 05:46:13.312000 audit: BPF prog-id=252 op=LOAD Jan 14 05:46:13.312000 audit[5856]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000214218 a2=98 a3=0 items=0 ppid=5703 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831643336353861376461323661656633616634393934633764313434 Jan 14 05:46:13.319000 audit: BPF prog-id=252 op=UNLOAD Jan 14 05:46:13.319000 audit[5856]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5703 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831643336353861376461323661656633616634393934633764313434 Jan 14 05:46:13.319000 audit: BPF prog-id=251 op=UNLOAD Jan 14 05:46:13.319000 audit[5856]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5703 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831643336353861376461323661656633616634393934633764313434 Jan 14 05:46:13.320000 audit: BPF prog-id=253 op=LOAD Jan 14 05:46:13.320000 audit[5856]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002146e8 a2=98 a3=0 items=0 ppid=5703 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831643336353861376461323661656633616634393934633764313434 Jan 14 05:46:13.424869 containerd[1620]: time="2026-01-14T05:46:13.424745212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w9fc7,Uid:cefaac90-18ac-4910-8420-3803dde0c763,Namespace:kube-system,Attempt:0,} 
returns sandbox id \"22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a\"" Jan 14 05:46:13.433894 kubelet[2825]: E0114 05:46:13.431849 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:13.486726 containerd[1620]: time="2026-01-14T05:46:13.486688491Z" level=info msg="CreateContainer within sandbox \"22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 05:46:13.502000 audit[5889]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5889 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:46:13.504558 containerd[1620]: time="2026-01-14T05:46:13.503998742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d668c555c-qwjx8,Uid:e1f153ba-430a-43e5-84a9-e29936603f76,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c13266102949e5304b0848bb65c42d925c168b3a764df23c80d70afc8996b39d\"" Jan 14 05:46:13.502000 audit[5889]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffff8c27f80 a2=0 a3=7ffff8c27f6c items=0 ppid=2986 pid=5889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.502000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:13.510922 containerd[1620]: time="2026-01-14T05:46:13.510662441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 05:46:13.535000 audit[5889]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=5889 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:46:13.535000 audit[5889]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=3468 a0=3 a1=7ffff8c27f80 a2=0 a3=0 items=0 ppid=2986 pid=5889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.535000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:13.559011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount375221473.mount: Deactivated successfully. Jan 14 05:46:13.590705 containerd[1620]: time="2026-01-14T05:46:13.590669558Z" level=info msg="Container 489a4a956335b2f2180c4f1f08d02c7a59a55df87d5832078d89138647c98a3f: CDI devices from CRI Config.CDIDevices: []" Jan 14 05:46:13.605933 containerd[1620]: time="2026-01-14T05:46:13.605635195Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:13.616023 containerd[1620]: time="2026-01-14T05:46:13.615470929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 05:46:13.618456 containerd[1620]: time="2026-01-14T05:46:13.618423220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:13.623658 kubelet[2825]: E0114 05:46:13.621725 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:46:13.623658 kubelet[2825]: E0114 05:46:13.621891 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:46:13.623658 kubelet[2825]: E0114 05:46:13.621958 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7d668c555c-qwjx8_calico-apiserver(e1f153ba-430a-43e5-84a9-e29936603f76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:13.625834 systemd-networkd[1505]: cali9aa4130b243: Gained IPv6LL Jan 14 05:46:13.629778 kubelet[2825]: E0114 05:46:13.629571 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:46:13.657661 containerd[1620]: time="2026-01-14T05:46:13.656523094Z" level=info msg="CreateContainer within sandbox \"22f86e32c578ddc27f2c3ca4774c5ed9bb37e79cb24a8f54b9799d3706a31e6a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"489a4a956335b2f2180c4f1f08d02c7a59a55df87d5832078d89138647c98a3f\"" Jan 14 05:46:13.661571 containerd[1620]: time="2026-01-14T05:46:13.661546904Z" level=info msg="StartContainer for \"81d3658a7da26aef3af4994c7d1447678f21f84e4cc0c8a011fb4a80bba75037\" returns successfully" Jan 14 05:46:13.674625 containerd[1620]: time="2026-01-14T05:46:13.662561466Z" level=info msg="StartContainer for \"489a4a956335b2f2180c4f1f08d02c7a59a55df87d5832078d89138647c98a3f\"" Jan 14 05:46:13.706915 containerd[1620]: 
time="2026-01-14T05:46:13.706718750Z" level=info msg="connecting to shim 489a4a956335b2f2180c4f1f08d02c7a59a55df87d5832078d89138647c98a3f" address="unix:///run/containerd/s/8893fa2a767d070af28b748d5d8afe32638600aed20839b91aa241b54b4955af" protocol=ttrpc version=3 Jan 14 05:46:13.869899 systemd[1]: Started cri-containerd-489a4a956335b2f2180c4f1f08d02c7a59a55df87d5832078d89138647c98a3f.scope - libcontainer container 489a4a956335b2f2180c4f1f08d02c7a59a55df87d5832078d89138647c98a3f. Jan 14 05:46:13.943000 audit: BPF prog-id=254 op=LOAD Jan 14 05:46:13.947000 audit: BPF prog-id=255 op=LOAD Jan 14 05:46:13.947000 audit[5894]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5799 pid=5894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438396134613935363333356232663231383063346631663038643032 Jan 14 05:46:13.947000 audit: BPF prog-id=255 op=UNLOAD Jan 14 05:46:13.947000 audit[5894]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5799 pid=5894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438396134613935363333356232663231383063346631663038643032 Jan 14 05:46:13.947000 audit: BPF prog-id=256 op=LOAD Jan 14 05:46:13.947000 audit[5894]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5799 pid=5894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438396134613935363333356232663231383063346631663038643032 Jan 14 05:46:13.947000 audit: BPF prog-id=257 op=LOAD Jan 14 05:46:13.947000 audit[5894]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5799 pid=5894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438396134613935363333356232663231383063346631663038643032 Jan 14 05:46:13.947000 audit: BPF prog-id=257 op=UNLOAD Jan 14 05:46:13.947000 audit[5894]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5799 pid=5894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438396134613935363333356232663231383063346631663038643032 Jan 14 05:46:13.947000 audit: BPF prog-id=256 op=UNLOAD 
Jan 14 05:46:13.947000 audit[5894]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5799 pid=5894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438396134613935363333356232663231383063346631663038643032 Jan 14 05:46:13.947000 audit: BPF prog-id=258 op=LOAD Jan 14 05:46:13.947000 audit[5894]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5799 pid=5894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:13.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438396134613935363333356232663231383063346631663038643032 Jan 14 05:46:14.107860 containerd[1620]: time="2026-01-14T05:46:14.107596600Z" level=info msg="StartContainer for \"489a4a956335b2f2180c4f1f08d02c7a59a55df87d5832078d89138647c98a3f\" returns successfully" Jan 14 05:46:14.171474 kubelet[2825]: E0114 05:46:14.168561 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:46:14.190842 kubelet[2825]: E0114 05:46:14.190761 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:14.201506 kubelet[2825]: E0114 05:46:14.199810 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:14.208838 kubelet[2825]: E0114 05:46:14.208790 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:46:14.403696 kubelet[2825]: I0114 05:46:14.402683 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w9fc7" podStartSLOduration=91.402662335 podStartE2EDuration="1m31.402662335s" podCreationTimestamp="2026-01-14 05:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 05:46:14.336922186 +0000 UTC m=+96.521401707" watchObservedRunningTime="2026-01-14 05:46:14.402662335 +0000 UTC m=+96.587141855" Jan 14 05:46:14.470666 kubelet[2825]: I0114 05:46:14.469965 2825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5h66t" podStartSLOduration=91.4699463 podStartE2EDuration="1m31.4699463s" 
podCreationTimestamp="2026-01-14 05:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 05:46:14.409643907 +0000 UTC m=+96.594123429" watchObservedRunningTime="2026-01-14 05:46:14.4699463 +0000 UTC m=+96.654425821" Jan 14 05:46:14.785000 audit[5939]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5939 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:46:14.818438 kernel: kauditd_printk_skb: 216 callbacks suppressed Jan 14 05:46:14.818560 kernel: audit: type=1325 audit(1768369574.785:729): table=filter:130 family=2 entries=20 op=nft_register_rule pid=5939 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:46:14.785000 audit[5939]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdd3b86e00 a2=0 a3=7ffdd3b86dec items=0 ppid=2986 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:14.890537 kernel: audit: type=1300 audit(1768369574.785:729): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdd3b86e00 a2=0 a3=7ffdd3b86dec items=0 ppid=2986 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:14.785000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:14.918880 kernel: audit: type=1327 audit(1768369574.785:729): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:14.891000 audit[5939]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5939 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
Jan 14 05:46:14.891000 audit[5939]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdd3b86e00 a2=0 a3=0 items=0 ppid=2986 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:15.015665 kernel: audit: type=1325 audit(1768369574.891:730): table=nat:131 family=2 entries=14 op=nft_register_rule pid=5939 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:46:15.015919 kernel: audit: type=1300 audit(1768369574.891:730): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdd3b86e00 a2=0 a3=0 items=0 ppid=2986 pid=5939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:14.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:15.047504 kernel: audit: type=1327 audit(1768369574.891:730): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:15.249962 kubelet[2825]: E0114 05:46:15.203829 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:15.249962 kubelet[2825]: E0114 05:46:15.204639 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:15.249962 kubelet[2825]: E0114 05:46:15.208506 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:46:16.129000 audit[5941]: NETFILTER_CFG table=filter:132 family=2 entries=17 op=nft_register_rule pid=5941 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:46:16.129000 audit[5941]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd75fddc10 a2=0 a3=7ffd75fddbfc items=0 ppid=2986 pid=5941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:16.186361 kernel: audit: type=1325 audit(1768369576.129:731): table=filter:132 family=2 entries=17 op=nft_register_rule pid=5941 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:46:16.186465 kernel: audit: type=1300 audit(1768369576.129:731): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd75fddc10 a2=0 a3=7ffd75fddbfc items=0 ppid=2986 pid=5941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:16.129000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:16.202615 kernel: audit: type=1327 audit(1768369576.129:731): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:16.210670 kubelet[2825]: E0114 05:46:16.209411 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:16.210670 
kubelet[2825]: E0114 05:46:16.209558 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:16.242000 audit[5941]: NETFILTER_CFG table=nat:133 family=2 entries=47 op=nft_register_chain pid=5941 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:46:16.261556 kernel: audit: type=1325 audit(1768369576.242:732): table=nat:133 family=2 entries=47 op=nft_register_chain pid=5941 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:46:16.242000 audit[5941]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd75fddc10 a2=0 a3=7ffd75fddbfc items=0 ppid=2986 pid=5941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:46:16.242000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:46:17.214016 kubelet[2825]: E0114 05:46:17.213873 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:19.106543 containerd[1620]: time="2026-01-14T05:46:19.106479788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 05:46:19.199030 containerd[1620]: time="2026-01-14T05:46:19.198974180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:19.201112 containerd[1620]: time="2026-01-14T05:46:19.201036418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 05:46:19.201424 
containerd[1620]: time="2026-01-14T05:46:19.201118331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:19.202807 kubelet[2825]: E0114 05:46:19.202674 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 05:46:19.203581 kubelet[2825]: E0114 05:46:19.202828 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 05:46:19.203581 kubelet[2825]: E0114 05:46:19.202927 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9f97fd46d-kvn2d_calico-system(a883e1fb-a961-4974-a5a0-9481f730a55a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:19.205891 containerd[1620]: time="2026-01-14T05:46:19.205698911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 05:46:19.270499 containerd[1620]: time="2026-01-14T05:46:19.269930435Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:19.272717 containerd[1620]: time="2026-01-14T05:46:19.272445120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" Jan 14 05:46:19.272717 containerd[1620]: time="2026-01-14T05:46:19.272524317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:19.273782 kubelet[2825]: E0114 05:46:19.273567 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 05:46:19.273782 kubelet[2825]: E0114 05:46:19.273695 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 05:46:19.273782 kubelet[2825]: E0114 05:46:19.273778 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9f97fd46d-kvn2d_calico-system(a883e1fb-a961-4974-a5a0-9481f730a55a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:19.273957 kubelet[2825]: E0114 05:46:19.273815 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a" Jan 14 05:46:21.106769 containerd[1620]: time="2026-01-14T05:46:21.106724943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 05:46:21.174788 containerd[1620]: time="2026-01-14T05:46:21.174694387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:21.178925 containerd[1620]: time="2026-01-14T05:46:21.178457001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 05:46:21.178925 containerd[1620]: time="2026-01-14T05:46:21.178659961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:21.180022 kubelet[2825]: E0114 05:46:21.179560 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 05:46:21.180022 kubelet[2825]: E0114 05:46:21.179702 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 05:46:21.180022 kubelet[2825]: E0114 05:46:21.179776 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:21.183666 containerd[1620]: time="2026-01-14T05:46:21.182908121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 05:46:21.245559 containerd[1620]: time="2026-01-14T05:46:21.245151938Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:21.640758 containerd[1620]: time="2026-01-14T05:46:21.640639269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 05:46:21.640915 containerd[1620]: time="2026-01-14T05:46:21.640761403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:21.641755 kubelet[2825]: E0114 05:46:21.641330 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 05:46:21.641755 kubelet[2825]: E0114 05:46:21.641384 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 05:46:21.642330 kubelet[2825]: E0114 05:46:21.642081 2825 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container csi-node-driver-registrar start failed in pod csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:21.642330 kubelet[2825]: E0114 05:46:21.642130 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:46:23.104457 containerd[1620]: time="2026-01-14T05:46:23.103585524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 05:46:23.171849 containerd[1620]: time="2026-01-14T05:46:23.171548028Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:23.175024 containerd[1620]: time="2026-01-14T05:46:23.174950104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 05:46:23.175653 containerd[1620]: time="2026-01-14T05:46:23.175072238Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:23.177466 kubelet[2825]: E0114 05:46:23.176940 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 05:46:23.177466 kubelet[2825]: E0114 05:46:23.176982 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 05:46:23.177466 kubelet[2825]: E0114 05:46:23.177048 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-85449f874f-xn2d4_calico-system(64a69192-713c-418d-907c-75ea3917f0cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:23.177466 kubelet[2825]: E0114 05:46:23.177077 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:46:24.111106 containerd[1620]: time="2026-01-14T05:46:24.111062867Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 05:46:24.179961 containerd[1620]: time="2026-01-14T05:46:24.179908308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:24.182891 containerd[1620]: time="2026-01-14T05:46:24.182779719Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 05:46:24.184149 kubelet[2825]: E0114 05:46:24.184027 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:46:24.184149 kubelet[2825]: E0114 05:46:24.184081 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:46:24.185788 kubelet[2825]: E0114 05:46:24.184477 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c8b67549f-nrl4m_calico-apiserver(f8e3b291-7413-4398-b3ac-57e03796db9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:24.185788 kubelet[2825]: E0114 05:46:24.184513 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:46:24.208827 containerd[1620]: time="2026-01-14T05:46:24.182920713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:26.110626 containerd[1620]: time="2026-01-14T05:46:26.110577777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 05:46:26.174501 containerd[1620]: time="2026-01-14T05:46:26.173828653Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:26.177890 containerd[1620]: time="2026-01-14T05:46:26.177536163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 05:46:26.177890 containerd[1620]: time="2026-01-14T05:46:26.177599581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:26.177972 kubelet[2825]: E0114 05:46:26.177748 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 05:46:26.177972 kubelet[2825]: E0114 05:46:26.177799 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 
05:46:26.177972 kubelet[2825]: E0114 05:46:26.177886 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h4bdc_calico-system(1cbfb118-b594-42d6-be3d-0e1840e8dae4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:26.177972 kubelet[2825]: E0114 05:46:26.177930 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:46:28.113638 containerd[1620]: time="2026-01-14T05:46:28.112711678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 05:46:28.196770 containerd[1620]: time="2026-01-14T05:46:28.195795676Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:28.199763 containerd[1620]: time="2026-01-14T05:46:28.199616935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 05:46:28.199763 containerd[1620]: time="2026-01-14T05:46:28.199707223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:28.200756 kubelet[2825]: E0114 05:46:28.200032 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:46:28.200756 kubelet[2825]: E0114 05:46:28.200150 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:46:28.201647 kubelet[2825]: E0114 05:46:28.200802 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c8b67549f-bpw89_calico-apiserver(5832da08-4ce6-484b-b421-5f73ad1ce8d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:28.201647 kubelet[2825]: E0114 05:46:28.200834 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:46:28.201742 containerd[1620]: time="2026-01-14T05:46:28.201693118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 05:46:28.277904 containerd[1620]: time="2026-01-14T05:46:28.277720844Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:28.281141 containerd[1620]: time="2026-01-14T05:46:28.281111857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 05:46:28.281551 containerd[1620]: time="2026-01-14T05:46:28.281341293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:28.283545 kubelet[2825]: E0114 05:46:28.282023 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:46:28.283545 kubelet[2825]: E0114 05:46:28.282134 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:46:28.283545 kubelet[2825]: E0114 05:46:28.282602 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7d668c555c-qwjx8_calico-apiserver(e1f153ba-430a-43e5-84a9-e29936603f76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:28.283545 kubelet[2825]: E0114 05:46:28.282633 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" 
podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:46:28.798445 kubelet[2825]: E0114 05:46:28.796688 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:33.105038 kubelet[2825]: E0114 05:46:33.104968 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a" Jan 14 05:46:36.113384 kubelet[2825]: E0114 05:46:36.112068 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:46:45.180720 containerd[1620]: time="2026-01-14T05:46:45.156014819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 
05:46:45.416296 kubelet[2825]: E0114 05:46:45.414964 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:46:45.441076 kubelet[2825]: E0114 05:46:45.439482 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:46:45.441076 kubelet[2825]: E0114 05:46:45.439610 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:46:45.441076 kubelet[2825]: E0114 05:46:45.439776 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:46:45.590328 
containerd[1620]: time="2026-01-14T05:46:45.588959352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:45.615569 containerd[1620]: time="2026-01-14T05:46:45.614636184Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 05:46:45.619552 containerd[1620]: time="2026-01-14T05:46:45.619530614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:45.619967 kubelet[2825]: E0114 05:46:45.619929 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 05:46:45.620073 kubelet[2825]: E0114 05:46:45.620057 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 05:46:45.626320 kubelet[2825]: E0114 05:46:45.620756 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 05:46:45.626763 containerd[1620]: time="2026-01-14T05:46:45.625912094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 05:46:45.720666 containerd[1620]: 
time="2026-01-14T05:46:45.719466645Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:46:45.721809 containerd[1620]: time="2026-01-14T05:46:45.721641372Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 05:46:45.721809 containerd[1620]: time="2026-01-14T05:46:45.721712705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 05:46:45.722037 kubelet[2825]: E0114 05:46:45.721841 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:46:45.722331 kubelet[2825]: E0114 05:46:45.722051 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:46:45.723562 containerd[1620]: time="2026-01-14T05:46:45.723312505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 05:46:45.726881 kubelet[2825]: E0114 05:46:45.726480 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c8b67549f-nrl4m_calico-apiserver(f8e3b291-7413-4398-b3ac-57e03796db9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
logger="UnhandledError"
Jan 14 05:46:45.726881 kubelet[2825]: E0114 05:46:45.726619 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f"
Jan 14 05:46:45.829913 containerd[1620]: time="2026-01-14T05:46:45.829697982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 05:46:45.836548 containerd[1620]: time="2026-01-14T05:46:45.836471991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 14 05:46:45.836974 containerd[1620]: time="2026-01-14T05:46:45.836806857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Jan 14 05:46:45.837835 kubelet[2825]: E0114 05:46:45.837671 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 14 05:46:45.838844 kubelet[2825]: E0114 05:46:45.838121 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 14 05:46:45.840514 kubelet[2825]: E0114 05:46:45.839329 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 14 05:46:45.840514 kubelet[2825]: E0114 05:46:45.839717 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc"
Jan 14 05:46:47.105775 containerd[1620]: time="2026-01-14T05:46:47.105609909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 14 05:46:47.186374 containerd[1620]: time="2026-01-14T05:46:47.186047753Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 05:46:47.191350 containerd[1620]: time="2026-01-14T05:46:47.190327022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Jan 14 05:46:47.191350 containerd[1620]: time="2026-01-14T05:46:47.190362399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 14 05:46:47.191989 kubelet[2825]: E0114 05:46:47.191856 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 14 05:46:47.192831 kubelet[2825]: E0114 05:46:47.191994 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 14 05:46:47.192831 kubelet[2825]: E0114 05:46:47.192332 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9f97fd46d-kvn2d_calico-system(a883e1fb-a961-4974-a5a0-9481f730a55a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 14 05:46:47.199332 containerd[1620]: time="2026-01-14T05:46:47.198347897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 14 05:46:47.289556 containerd[1620]: time="2026-01-14T05:46:47.288023330Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 05:46:47.291863 containerd[1620]: time="2026-01-14T05:46:47.291745639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 14 05:46:47.291863 containerd[1620]: time="2026-01-14T05:46:47.291823625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Jan 14 05:46:47.296008 kubelet[2825]: E0114 05:46:47.295588 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 14 05:46:47.296008 kubelet[2825]: E0114 05:46:47.295639 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 14 05:46:47.296008 kubelet[2825]: E0114 05:46:47.295715 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9f97fd46d-kvn2d_calico-system(a883e1fb-a961-4974-a5a0-9481f730a55a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 14 05:46:47.296008 kubelet[2825]: E0114 05:46:47.295754 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a"
Jan 14 05:46:49.094960 systemd[1]: Started sshd@7-10.0.0.28:22-10.0.0.1:50190.service - OpenSSH per-connection server daemon (10.0.0.1:50190).
Jan 14 05:46:49.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.28:22-10.0.0.1:50190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:46:49.104593 kernel: kauditd_printk_skb: 2 callbacks suppressed
Jan 14 05:46:49.104905 kernel: audit: type=1130 audit(1768369609.095:733): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.28:22-10.0.0.1:50190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:46:49.122371 containerd[1620]: time="2026-01-14T05:46:49.121558882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 14 05:46:49.220266 containerd[1620]: time="2026-01-14T05:46:49.217622279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 05:46:49.238643 containerd[1620]: time="2026-01-14T05:46:49.232534284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 14 05:46:49.238643 containerd[1620]: time="2026-01-14T05:46:49.232656583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Jan 14 05:46:49.239014 kubelet[2825]: E0114 05:46:49.233901 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 14 05:46:49.239014 kubelet[2825]: E0114 05:46:49.233959 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 14 05:46:49.239014 kubelet[2825]: E0114 05:46:49.234051 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-85449f874f-xn2d4_calico-system(64a69192-713c-418d-907c-75ea3917f0cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 14 05:46:49.239014 kubelet[2825]: E0114 05:46:49.234095 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd"
Jan 14 05:46:49.504000 audit[6005]: USER_ACCT pid=6005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:49.506757 sshd[6005]: Accepted publickey for core from 10.0.0.1 port 50190 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:46:49.510035 sshd-session[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:46:49.536719 systemd-logind[1596]: New session 9 of user core.
Jan 14 05:46:49.545343 kernel: audit: type=1101 audit(1768369609.504:734): pid=6005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:49.507000 audit[6005]: CRED_ACQ pid=6005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:49.579707 kernel: audit: type=1103 audit(1768369609.507:735): pid=6005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:49.579735 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 14 05:46:49.507000 audit[6005]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3941c6d0 a2=3 a3=0 items=0 ppid=1 pid=6005 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:46:49.636576 kernel: audit: type=1006 audit(1768369609.507:736): pid=6005 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1
Jan 14 05:46:49.636701 kernel: audit: type=1300 audit(1768369609.507:736): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3941c6d0 a2=3 a3=0 items=0 ppid=1 pid=6005 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:46:49.507000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:46:49.655377 kernel: audit: type=1327 audit(1768369609.507:736): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:46:49.655672 kernel: audit: type=1105 audit(1768369609.603:737): pid=6005 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:49.603000 audit[6005]: USER_START pid=6005 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:49.612000 audit[6012]: CRED_ACQ pid=6012 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:49.743784 kernel: audit: type=1103 audit(1768369609.612:738): pid=6012 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:50.242855 sshd[6012]: Connection closed by 10.0.0.1 port 50190
Jan 14 05:46:50.247532 sshd-session[6005]: pam_unix(sshd:session): session closed for user core
Jan 14 05:46:50.255000 audit[6005]: USER_END pid=6005 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:50.299062 kernel: audit: type=1106 audit(1768369610.255:739): pid=6005 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:50.299918 systemd[1]: sshd@7-10.0.0.28:22-10.0.0.1:50190.service: Deactivated successfully.
Jan 14 05:46:50.256000 audit[6005]: CRED_DISP pid=6005 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:50.305520 systemd[1]: session-9.scope: Deactivated successfully.
Jan 14 05:46:50.308834 systemd-logind[1596]: Session 9 logged out. Waiting for processes to exit.
Jan 14 05:46:50.310911 systemd-logind[1596]: Removed session 9.
Jan 14 05:46:50.328649 kernel: audit: type=1104 audit(1768369610.256:740): pid=6005 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:50.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.28:22-10.0.0.1:50190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:46:55.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.28:22-10.0.0.1:38638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:46:55.267478 systemd[1]: Started sshd@8-10.0.0.28:22-10.0.0.1:38638.service - OpenSSH per-connection server daemon (10.0.0.1:38638).
Jan 14 05:46:55.293671 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 05:46:55.293888 kernel: audit: type=1130 audit(1768369615.266:742): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.28:22-10.0.0.1:38638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:46:55.477000 audit[6037]: USER_ACCT pid=6037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:55.482594 sshd-session[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:46:55.484081 sshd[6037]: Accepted publickey for core from 10.0.0.1 port 38638 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:46:55.497944 systemd-logind[1596]: New session 10 of user core.
Jan 14 05:46:55.479000 audit[6037]: CRED_ACQ pid=6037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:55.549353 kernel: audit: type=1101 audit(1768369615.477:743): pid=6037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:55.549451 kernel: audit: type=1103 audit(1768369615.479:744): pid=6037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:55.552415 kernel: audit: type=1006 audit(1768369615.479:745): pid=6037 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Jan 14 05:46:55.551104 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 14 05:46:55.479000 audit[6037]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6a4851c0 a2=3 a3=0 items=0 ppid=1 pid=6037 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:46:55.619436 kernel: audit: type=1300 audit(1768369615.479:745): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6a4851c0 a2=3 a3=0 items=0 ppid=1 pid=6037 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:46:55.479000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:46:55.644936 kernel: audit: type=1327 audit(1768369615.479:745): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:46:55.645134 kernel: audit: type=1105 audit(1768369615.561:746): pid=6037 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:55.561000 audit[6037]: USER_START pid=6037 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:55.566000 audit[6041]: CRED_ACQ pid=6041 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:55.719892 kernel: audit: type=1103 audit(1768369615.566:747): pid=6041 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:55.877724 sshd[6041]: Connection closed by 10.0.0.1 port 38638
Jan 14 05:46:55.882118 sshd-session[6037]: pam_unix(sshd:session): session closed for user core
Jan 14 05:46:55.889000 audit[6037]: USER_END pid=6037 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:55.896859 systemd[1]: sshd@8-10.0.0.28:22-10.0.0.1:38638.service: Deactivated successfully.
Jan 14 05:46:55.902696 systemd[1]: session-10.scope: Deactivated successfully.
Jan 14 05:46:55.912803 systemd-logind[1596]: Session 10 logged out. Waiting for processes to exit.
Jan 14 05:46:55.915443 systemd-logind[1596]: Removed session 10.
Jan 14 05:46:55.926654 kernel: audit: type=1106 audit(1768369615.889:748): pid=6037 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:55.926731 kernel: audit: type=1104 audit(1768369615.890:749): pid=6037 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:55.890000 audit[6037]: CRED_DISP pid=6037 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:46:55.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.28:22-10.0.0.1:38638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:46:57.104790 containerd[1620]: time="2026-01-14T05:46:57.104074433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 14 05:46:57.181354 containerd[1620]: time="2026-01-14T05:46:57.180368888Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 05:46:57.185758 containerd[1620]: time="2026-01-14T05:46:57.185625634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 14 05:46:57.185871 containerd[1620]: time="2026-01-14T05:46:57.185794619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 14 05:46:57.186752 kubelet[2825]: E0114 05:46:57.186441 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 14 05:46:57.186752 kubelet[2825]: E0114 05:46:57.186671 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 14 05:46:57.187489 kubelet[2825]: E0114 05:46:57.186772 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h4bdc_calico-system(1cbfb118-b594-42d6-be3d-0e1840e8dae4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 14 05:46:57.187489 kubelet[2825]: E0114 05:46:57.186814 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4"
Jan 14 05:46:58.108071 containerd[1620]: time="2026-01-14T05:46:58.107936091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 14 05:46:58.191929 containerd[1620]: time="2026-01-14T05:46:58.191121552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 05:46:58.198482 containerd[1620]: time="2026-01-14T05:46:58.198051114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 14 05:46:58.199516 containerd[1620]: time="2026-01-14T05:46:58.199154808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 14 05:46:58.202115 kubelet[2825]: E0114 05:46:58.200093 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 14 05:46:58.202115 kubelet[2825]: E0114 05:46:58.201328 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 14 05:46:58.202115 kubelet[2825]: E0114 05:46:58.201688 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c8b67549f-bpw89_calico-apiserver(5832da08-4ce6-484b-b421-5f73ad1ce8d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 14 05:46:58.202115 kubelet[2825]: E0114 05:46:58.201737 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2"
Jan 14 05:46:58.208455 containerd[1620]: time="2026-01-14T05:46:58.205799068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 14 05:46:58.311847 containerd[1620]: time="2026-01-14T05:46:58.311511684Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 05:46:58.315835 containerd[1620]: time="2026-01-14T05:46:58.315360009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 14 05:46:58.315835 containerd[1620]: time="2026-01-14T05:46:58.315485524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 14 05:46:58.316298 kubelet[2825]: E0114 05:46:58.315952 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 14 05:46:58.316298 kubelet[2825]: E0114 05:46:58.316003 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 14 05:46:58.316376 kubelet[2825]: E0114 05:46:58.316357 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7d668c555c-qwjx8_calico-apiserver(e1f153ba-430a-43e5-84a9-e29936603f76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 14 05:46:58.316413 kubelet[2825]: E0114 05:46:58.316399 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76"
Jan 14 05:46:59.106668 kubelet[2825]: E0114 05:46:59.106405 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc"
Jan 14 05:47:00.102013 kubelet[2825]: E0114 05:47:00.101890 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 05:47:00.119037 kubelet[2825]: E0114 05:47:00.118533 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a"
Jan 14 05:47:00.902502 systemd[1]: Started sshd@9-10.0.0.28:22-10.0.0.1:38654.service - OpenSSH per-connection server daemon (10.0.0.1:38654).
Jan 14 05:47:00.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.28:22-10.0.0.1:38654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:47:00.940711 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 05:47:00.940929 kernel: audit: type=1130 audit(1768369620.902:751): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.28:22-10.0.0.1:38654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:47:01.079999 sshd[6083]: Accepted publickey for core from 10.0.0.1 port 38654 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:47:01.078000 audit[6083]: USER_ACCT pid=6083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:47:01.089757 sshd-session[6083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:47:01.106716 systemd-logind[1596]: New session 11 of user core.
Jan 14 05:47:01.112394 kernel: audit: type=1101 audit(1768369621.078:752): pid=6083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:47:01.086000 audit[6083]: CRED_ACQ pid=6083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:47:01.130757 kubelet[2825]: E0114 05:47:01.130715 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd"
Jan 14 05:47:01.131405 kubelet[2825]: E0114 05:47:01.131060 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f"
Jan 14 05:47:01.145300 kernel: audit: type=1103 audit(1768369621.086:753): pid=6083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:47:01.147641 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 14 05:47:01.168004 kernel: audit: type=1006 audit(1768369621.086:754): pid=6083 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Jan 14 05:47:01.086000 audit[6083]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe32fa8390 a2=3 a3=0 items=0 ppid=1 pid=6083 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:47:01.202764 kernel: audit: type=1300 audit(1768369621.086:754): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe32fa8390 a2=3 a3=0 items=0 ppid=1 pid=6083 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:47:01.202877 kernel: audit: type=1327 audit(1768369621.086:754): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:47:01.086000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:47:01.158000 audit[6083]: USER_START pid=6083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:47:01.247029 kernel: audit: type=1105 audit(1768369621.158:755): pid=6083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:47:01.175000 audit[6087]: CRED_ACQ pid=6087 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:47:01.273413 kernel: audit: type=1103 audit(1768369621.175:756): pid=6087 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:47:01.509346 sshd[6087]: Connection closed by 10.0.0.1 port 38654
Jan 14 05:47:01.511861 sshd-session[6083]: pam_unix(sshd:session): session closed for user core
Jan 14 05:47:01.516000 audit[6083]: USER_END pid=6083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:47:01.523017 systemd[1]: sshd@9-10.0.0.28:22-10.0.0.1:38654.service: Deactivated successfully.
Jan 14 05:47:01.529104 systemd[1]: session-11.scope: Deactivated successfully.
Jan 14 05:47:01.533106 systemd-logind[1596]: Session 11 logged out. Waiting for processes to exit.
Jan 14 05:47:01.537072 systemd-logind[1596]: Removed session 11.
Jan 14 05:47:01.566468 kernel: audit: type=1106 audit(1768369621.516:757): pid=6083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:01.516000 audit[6083]: CRED_DISP pid=6083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:01.600342 kernel: audit: type=1104 audit(1768369621.516:758): pid=6083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:01.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.28:22-10.0.0.1:38654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:06.527979 systemd[1]: Started sshd@10-10.0.0.28:22-10.0.0.1:50588.service - OpenSSH per-connection server daemon (10.0.0.1:50588). Jan 14 05:47:06.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.28:22-10.0.0.1:50588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:06.541359 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:47:06.541418 kernel: audit: type=1130 audit(1768369626.526:760): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.28:22-10.0.0.1:50588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:47:06.648000 audit[6103]: USER_ACCT pid=6103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:06.651085 sshd[6103]: Accepted publickey for core from 10.0.0.1 port 50588 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:47:06.654979 sshd-session[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:47:06.666902 systemd-logind[1596]: New session 12 of user core. Jan 14 05:47:06.651000 audit[6103]: CRED_ACQ pid=6103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:06.709086 kernel: audit: type=1101 audit(1768369626.648:761): pid=6103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:06.709451 kernel: audit: type=1103 audit(1768369626.651:762): pid=6103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:06.709473 kernel: audit: type=1006 audit(1768369626.651:763): pid=6103 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 14 05:47:06.725345 kernel: audit: type=1300 audit(1768369626.651:763): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdaec6a340 a2=3 a3=0 items=0 ppid=1 pid=6103 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:06.651000 audit[6103]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdaec6a340 a2=3 a3=0 items=0 ppid=1 pid=6103 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:06.753998 kernel: audit: type=1327 audit(1768369626.651:763): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:06.651000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:06.767422 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 14 05:47:06.774000 audit[6103]: USER_START pid=6103 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:06.808594 kernel: audit: type=1105 audit(1768369626.774:764): pid=6103 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:06.811000 audit[6107]: CRED_ACQ pid=6107 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:06.839988 kernel: audit: type=1103 audit(1768369626.811:765): pid=6107 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:07.021012 sshd[6107]: Connection closed by 10.0.0.1 port 50588 Jan 14 05:47:07.022526 sshd-session[6103]: pam_unix(sshd:session): session closed for user core Jan 14 05:47:07.025000 audit[6103]: USER_END pid=6103 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:07.041094 systemd[1]: sshd@10-10.0.0.28:22-10.0.0.1:50588.service: Deactivated successfully. Jan 14 05:47:07.049471 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 05:47:07.054459 systemd-logind[1596]: Session 12 logged out. Waiting for processes to exit. Jan 14 05:47:07.058541 systemd-logind[1596]: Removed session 12. 
Jan 14 05:47:07.073520 kernel: audit: type=1106 audit(1768369627.025:766): pid=6103 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:07.073733 kernel: audit: type=1104 audit(1768369627.025:767): pid=6103 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:07.025000 audit[6103]: CRED_DISP pid=6103 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:07.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.28:22-10.0.0.1:50588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:47:10.105497 kubelet[2825]: E0114 05:47:10.105444 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:47:10.119551 kubelet[2825]: E0114 05:47:10.119147 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:47:12.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.28:22-10.0.0.1:50602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:12.040897 systemd[1]: Started sshd@11-10.0.0.28:22-10.0.0.1:50602.service - OpenSSH per-connection server daemon (10.0.0.1:50602). Jan 14 05:47:12.053327 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:47:12.053372 kernel: audit: type=1130 audit(1768369632.039:769): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.28:22-10.0.0.1:50602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:47:12.118355 kubelet[2825]: E0114 05:47:12.117394 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:47:12.130328 kubelet[2825]: E0114 05:47:12.128638 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:47:12.182000 audit[6121]: USER_ACCT pid=6121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:12.197589 systemd-logind[1596]: New session 13 of user core. 
Jan 14 05:47:12.189145 sshd-session[6121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:47:12.205776 sshd[6121]: Accepted publickey for core from 10.0.0.1 port 50602 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:47:12.184000 audit[6121]: CRED_ACQ pid=6121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:12.250105 kernel: audit: type=1101 audit(1768369632.182:770): pid=6121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:12.250477 kernel: audit: type=1103 audit(1768369632.184:771): pid=6121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:12.250504 kernel: audit: type=1006 audit(1768369632.184:772): pid=6121 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 05:47:12.184000 audit[6121]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe5fb6870 a2=3 a3=0 items=0 ppid=1 pid=6121 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:12.304509 kernel: audit: type=1300 audit(1768369632.184:772): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe5fb6870 a2=3 a3=0 items=0 ppid=1 pid=6121 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:12.273604 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 05:47:12.184000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:12.317537 kernel: audit: type=1327 audit(1768369632.184:772): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:12.281000 audit[6121]: USER_START pid=6121 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:12.286000 audit[6125]: CRED_ACQ pid=6125 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:12.385496 kernel: audit: type=1105 audit(1768369632.281:773): pid=6121 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:12.385588 kernel: audit: type=1103 audit(1768369632.286:774): pid=6125 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:12.570594 sshd[6125]: Connection closed by 10.0.0.1 port 50602 Jan 14 05:47:12.571362 sshd-session[6121]: pam_unix(sshd:session): session closed for user core Jan 14 05:47:12.575000 audit[6121]: USER_END pid=6121 uid=0 auid=500 ses=13 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:12.582436 systemd[1]: sshd@11-10.0.0.28:22-10.0.0.1:50602.service: Deactivated successfully. Jan 14 05:47:12.589991 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 05:47:12.597832 systemd-logind[1596]: Session 13 logged out. Waiting for processes to exit. Jan 14 05:47:12.601332 systemd-logind[1596]: Removed session 13. Jan 14 05:47:12.618451 kernel: audit: type=1106 audit(1768369632.575:775): pid=6121 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:12.618608 kernel: audit: type=1104 audit(1768369632.575:776): pid=6121 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:12.575000 audit[6121]: CRED_DISP pid=6121 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:12.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.28:22-10.0.0.1:50602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:47:13.105855 kubelet[2825]: E0114 05:47:13.105061 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:47:15.106659 kubelet[2825]: E0114 05:47:15.106508 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a" Jan 14 05:47:16.111562 kubelet[2825]: E0114 05:47:16.109649 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:47:17.600308 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:47:17.600411 kernel: audit: type=1130 audit(1768369637.588:778): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.28:22-10.0.0.1:39654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:17.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.28:22-10.0.0.1:39654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:17.588677 systemd[1]: Started sshd@12-10.0.0.28:22-10.0.0.1:39654.service - OpenSSH per-connection server daemon (10.0.0.1:39654). Jan 14 05:47:17.711000 audit[6142]: USER_ACCT pid=6142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:17.713993 sshd[6142]: Accepted publickey for core from 10.0.0.1 port 39654 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:47:17.717357 sshd-session[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:47:17.734356 systemd-logind[1596]: New session 14 of user core. 
Jan 14 05:47:17.713000 audit[6142]: CRED_ACQ pid=6142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:17.772348 kernel: audit: type=1101 audit(1768369637.711:779): pid=6142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:17.772600 kernel: audit: type=1103 audit(1768369637.713:780): pid=6142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:17.776885 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 05:47:17.793388 kernel: audit: type=1006 audit(1768369637.714:781): pid=6142 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 05:47:17.714000 audit[6142]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd01480600 a2=3 a3=0 items=0 ppid=1 pid=6142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:17.714000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:17.842591 kernel: audit: type=1300 audit(1768369637.714:781): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd01480600 a2=3 a3=0 items=0 ppid=1 pid=6142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:17.842968 kernel: audit: type=1327 audit(1768369637.714:781): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:17.843842 kernel: audit: type=1105 audit(1768369637.787:782): pid=6142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:17.787000 audit[6142]: USER_START pid=6142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:17.798000 audit[6146]: CRED_ACQ pid=6146 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:17.909607 kernel: audit: type=1103 audit(1768369637.798:783): pid=6146 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:18.056829 sshd[6146]: Connection closed by 10.0.0.1 port 39654 Jan 14 05:47:18.057553 sshd-session[6142]: pam_unix(sshd:session): session closed for user core Jan 14 05:47:18.059000 audit[6142]: USER_END pid=6142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:18.100536 kernel: audit: type=1106 audit(1768369638.059:784): pid=6142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:18.103876 systemd[1]: sshd@12-10.0.0.28:22-10.0.0.1:39654.service: Deactivated successfully. Jan 14 05:47:18.064000 audit[6142]: CRED_DISP pid=6142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:18.107089 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 05:47:18.109826 systemd-logind[1596]: Session 14 logged out. Waiting for processes to exit. 
Jan 14 05:47:18.112891 systemd-logind[1596]: Removed session 14. Jan 14 05:47:18.132632 kernel: audit: type=1104 audit(1768369638.064:785): pid=6142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:18.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.28:22-10.0.0.1:39654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:20.101660 kubelet[2825]: E0114 05:47:20.101439 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:47:22.114047 kubelet[2825]: E0114 05:47:22.113528 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:47:23.078476 systemd[1]: Started sshd@13-10.0.0.28:22-10.0.0.1:39662.service - OpenSSH per-connection server daemon (10.0.0.1:39662). Jan 14 05:47:23.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.28:22-10.0.0.1:39662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:47:23.085880 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:47:23.085967 kernel: audit: type=1130 audit(1768369643.077:787): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.28:22-10.0.0.1:39662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:23.108095 kubelet[2825]: E0114 05:47:23.107940 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:47:23.212000 audit[6160]: USER_ACCT pid=6160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:23.215492 sshd[6160]: Accepted publickey for core from 10.0.0.1 port 39662 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:47:23.218403 sshd-session[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:47:23.232996 systemd-logind[1596]: New session 15 of user core. 
Jan 14 05:47:23.215000 audit[6160]: CRED_ACQ pid=6160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:23.272398 kernel: audit: type=1101 audit(1768369643.212:788): pid=6160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:23.272487 kernel: audit: type=1103 audit(1768369643.215:789): pid=6160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:23.272517 kernel: audit: type=1006 audit(1768369643.215:790): pid=6160 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 05:47:23.215000 audit[6160]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecedcf650 a2=3 a3=0 items=0 ppid=1 pid=6160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:23.290551 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 05:47:23.326536 kernel: audit: type=1300 audit(1768369643.215:790): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecedcf650 a2=3 a3=0 items=0 ppid=1 pid=6160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:23.215000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:23.343625 kernel: audit: type=1327 audit(1768369643.215:790): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:23.300000 audit[6160]: USER_START pid=6160 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:23.327000 audit[6164]: CRED_ACQ pid=6164 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:23.416142 kernel: audit: type=1105 audit(1768369643.300:791): pid=6160 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:23.416355 kernel: audit: type=1103 audit(1768369643.327:792): pid=6164 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:23.552009 sshd[6164]: Connection closed by 10.0.0.1 port 39662 
Jan 14 05:47:23.552505 sshd-session[6160]: pam_unix(sshd:session): session closed for user core Jan 14 05:47:23.556000 audit[6160]: USER_END pid=6160 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:23.564391 systemd-logind[1596]: Session 15 logged out. Waiting for processes to exit. Jan 14 05:47:23.566125 systemd[1]: sshd@13-10.0.0.28:22-10.0.0.1:39662.service: Deactivated successfully. Jan 14 05:47:23.569537 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 05:47:23.572652 systemd-logind[1596]: Removed session 15. Jan 14 05:47:23.596755 kernel: audit: type=1106 audit(1768369643.556:793): pid=6160 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:23.596907 kernel: audit: type=1104 audit(1768369643.556:794): pid=6160 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:23.556000 audit[6160]: CRED_DISP pid=6160 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:23.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.28:22-10.0.0.1:39662 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jan 14 05:47:26.103376 kubelet[2825]: E0114 05:47:26.102067 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:47:27.106106 kubelet[2825]: E0114 05:47:27.105767 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:47:27.108634 containerd[1620]: time="2026-01-14T05:47:27.108494375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 05:47:27.183144 containerd[1620]: time="2026-01-14T05:47:27.182953462Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:47:27.188704 containerd[1620]: time="2026-01-14T05:47:27.187977972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 05:47:27.188704 containerd[1620]: time="2026-01-14T05:47:27.188664075Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 05:47:27.192073 kubelet[2825]: E0114 05:47:27.191610 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 05:47:27.192073 kubelet[2825]: E0114 05:47:27.191741 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 05:47:27.192073 kubelet[2825]: E0114 05:47:27.191808 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 05:47:27.199388 containerd[1620]: time="2026-01-14T05:47:27.198955338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 05:47:27.280593 containerd[1620]: time="2026-01-14T05:47:27.280130399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:47:27.283136 containerd[1620]: time="2026-01-14T05:47:27.282736491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 05:47:27.283136 containerd[1620]: time="2026-01-14T05:47:27.282928040Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 05:47:27.283546 kubelet[2825]: E0114 05:47:27.283409 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 05:47:27.283546 kubelet[2825]: E0114 05:47:27.283539 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 05:47:27.283732 kubelet[2825]: E0114 05:47:27.283647 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-46w2k_calico-system(2b560ec8-f090-4614-a1d5-13a4bc0ce8dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 05:47:27.283732 kubelet[2825]: E0114 05:47:27.283689 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:47:28.113573 containerd[1620]: time="2026-01-14T05:47:28.110978488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 05:47:28.209660 containerd[1620]: time="2026-01-14T05:47:28.209532698Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:47:28.213092 containerd[1620]: time="2026-01-14T05:47:28.212698985Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 05:47:28.213092 containerd[1620]: time="2026-01-14T05:47:28.212933353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 05:47:28.213750 kubelet[2825]: E0114 05:47:28.213066 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:47:28.213750 kubelet[2825]: E0114 05:47:28.213106 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:47:28.213750 kubelet[2825]: E0114 05:47:28.213340 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c8b67549f-nrl4m_calico-apiserver(f8e3b291-7413-4398-b3ac-57e03796db9f): ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 05:47:28.213750 kubelet[2825]: E0114 05:47:28.213371 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:47:28.601421 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:47:28.601530 kernel: audit: type=1130 audit(1768369648.570:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.28:22-10.0.0.1:39242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:28.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.28:22-10.0.0.1:39242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:28.571374 systemd[1]: Started sshd@14-10.0.0.28:22-10.0.0.1:39242.service - OpenSSH per-connection server daemon (10.0.0.1:39242). 
Jan 14 05:47:28.702000 audit[6184]: USER_ACCT pid=6184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:28.704429 sshd[6184]: Accepted publickey for core from 10.0.0.1 port 39242 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:47:28.711553 sshd-session[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:47:28.730710 systemd-logind[1596]: New session 16 of user core. Jan 14 05:47:28.735457 kernel: audit: type=1101 audit(1768369648.702:797): pid=6184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:28.708000 audit[6184]: CRED_ACQ pid=6184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:28.738661 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 14 05:47:28.770515 kernel: audit: type=1103 audit(1768369648.708:798): pid=6184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:28.790076 kernel: audit: type=1006 audit(1768369648.708:799): pid=6184 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 05:47:28.708000 audit[6184]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf1491840 a2=3 a3=0 items=0 ppid=1 pid=6184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:28.708000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:28.836509 kernel: audit: type=1300 audit(1768369648.708:799): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf1491840 a2=3 a3=0 items=0 ppid=1 pid=6184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:28.836574 kernel: audit: type=1327 audit(1768369648.708:799): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:28.836681 kernel: audit: type=1105 audit(1768369648.749:800): pid=6184 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:28.749000 audit[6184]: USER_START pid=6184 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:28.787000 audit[6207]: CRED_ACQ pid=6207 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:28.907493 kernel: audit: type=1103 audit(1768369648.787:801): pid=6207 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:29.101734 kubelet[2825]: E0114 05:47:29.101655 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:47:29.173341 sshd[6207]: Connection closed by 10.0.0.1 port 39242 Jan 14 05:47:29.172888 sshd-session[6184]: pam_unix(sshd:session): session closed for user core Jan 14 05:47:29.177000 audit[6184]: USER_END pid=6184 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:29.181645 systemd-logind[1596]: Session 16 logged out. Waiting for processes to exit. Jan 14 05:47:29.183680 systemd[1]: sshd@14-10.0.0.28:22-10.0.0.1:39242.service: Deactivated successfully. Jan 14 05:47:29.189383 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 05:47:29.191995 systemd-logind[1596]: Removed session 16. 
Jan 14 05:47:29.177000 audit[6184]: CRED_DISP pid=6184 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:29.243972 kernel: audit: type=1106 audit(1768369649.177:802): pid=6184 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:29.244107 kernel: audit: type=1104 audit(1768369649.177:803): pid=6184 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:29.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.28:22-10.0.0.1:39242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:47:30.106063 containerd[1620]: time="2026-01-14T05:47:30.105282395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 05:47:30.175859 containerd[1620]: time="2026-01-14T05:47:30.175432015Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:47:30.180490 containerd[1620]: time="2026-01-14T05:47:30.180453712Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 05:47:30.180611 containerd[1620]: time="2026-01-14T05:47:30.180597114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 05:47:30.181860 kubelet[2825]: E0114 05:47:30.181831 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 05:47:30.183816 kubelet[2825]: E0114 05:47:30.183620 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 05:47:30.183977 kubelet[2825]: E0114 05:47:30.183878 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9f97fd46d-kvn2d_calico-system(a883e1fb-a961-4974-a5a0-9481f730a55a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
logger="UnhandledError" Jan 14 05:47:30.188014 containerd[1620]: time="2026-01-14T05:47:30.187961592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 05:47:30.261589 containerd[1620]: time="2026-01-14T05:47:30.261336157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:47:30.267027 containerd[1620]: time="2026-01-14T05:47:30.266140573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 05:47:30.268093 kubelet[2825]: E0114 05:47:30.267527 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 05:47:30.268403 containerd[1620]: time="2026-01-14T05:47:30.267910638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 05:47:30.268466 kubelet[2825]: E0114 05:47:30.267707 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 05:47:30.268516 kubelet[2825]: E0114 05:47:30.268463 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9f97fd46d-kvn2d_calico-system(a883e1fb-a961-4974-a5a0-9481f730a55a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 05:47:30.268553 kubelet[2825]: E0114 05:47:30.268521 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a" Jan 14 05:47:34.102070 kubelet[2825]: E0114 05:47:34.101977 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:47:34.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.28:22-10.0.0.1:39248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:34.196980 systemd[1]: Started sshd@15-10.0.0.28:22-10.0.0.1:39248.service - OpenSSH per-connection server daemon (10.0.0.1:39248). Jan 14 05:47:34.231137 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:47:34.231365 kernel: audit: type=1130 audit(1768369654.197:805): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.28:22-10.0.0.1:39248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:47:34.343000 audit[6232]: USER_ACCT pid=6232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.344878 sshd[6232]: Accepted publickey for core from 10.0.0.1 port 39248 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:47:34.349540 sshd-session[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:47:34.362031 systemd-logind[1596]: New session 17 of user core. Jan 14 05:47:34.346000 audit[6232]: CRED_ACQ pid=6232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.411798 kernel: audit: type=1101 audit(1768369654.343:806): pid=6232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.411983 kernel: audit: type=1103 audit(1768369654.346:807): pid=6232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.412011 kernel: audit: type=1006 audit(1768369654.346:808): pid=6232 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 05:47:34.346000 audit[6232]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1c398de0 a2=3 a3=0 items=0 ppid=1 pid=6232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:34.433407 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 14 05:47:34.346000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:34.482884 kernel: audit: type=1300 audit(1768369654.346:808): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1c398de0 a2=3 a3=0 items=0 ppid=1 pid=6232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:34.482933 kernel: audit: type=1327 audit(1768369654.346:808): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:34.440000 audit[6232]: USER_START pid=6232 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.522908 kernel: audit: type=1105 audit(1768369654.440:809): pid=6232 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.523393 kernel: audit: type=1103 audit(1768369654.446:810): pid=6236 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.446000 audit[6236]: CRED_ACQ pid=6236 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.746513 sshd[6236]: Connection closed by 10.0.0.1 port 39248 Jan 14 05:47:34.749909 sshd-session[6232]: pam_unix(sshd:session): session closed for user core Jan 14 05:47:34.752000 audit[6232]: USER_END pid=6232 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.752000 audit[6232]: CRED_DISP pid=6232 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.795113 systemd[1]: sshd@15-10.0.0.28:22-10.0.0.1:39248.service: Deactivated successfully. Jan 14 05:47:34.800428 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 05:47:34.802569 systemd-logind[1596]: Session 17 logged out. Waiting for processes to exit. 
Jan 14 05:47:34.812533 kernel: audit: type=1106 audit(1768369654.752:811): pid=6232 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.812702 kernel: audit: type=1104 audit(1768369654.752:812): pid=6232 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.28:22-10.0.0.1:39248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:34.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.28:22-10.0.0.1:57868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:34.812883 systemd[1]: Started sshd@16-10.0.0.28:22-10.0.0.1:57868.service - OpenSSH per-connection server daemon (10.0.0.1:57868). Jan 14 05:47:34.815740 systemd-logind[1596]: Removed session 17. 
Jan 14 05:47:34.909000 audit[6251]: USER_ACCT pid=6251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.909927 sshd[6251]: Accepted publickey for core from 10.0.0.1 port 57868 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:47:34.910000 audit[6251]: CRED_ACQ pid=6251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.911000 audit[6251]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb6cc1ba0 a2=3 a3=0 items=0 ppid=1 pid=6251 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:34.911000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:34.912882 sshd-session[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:47:34.922053 systemd-logind[1596]: New session 18 of user core. Jan 14 05:47:34.933874 systemd[1]: Started session-18.scope - Session 18 of User core. 
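Editor's note: the `PROCTITLE proctitle=7373...` fields in the audit records above are the process title as hex-encoded bytes (auditd hex-encodes the value when it contains spaces or other special characters, with argv elements NUL-separated). A minimal sketch to decode them:

```python
# Decode an auditd PROCTITLE hex value back to readable text.
# argv elements are NUL-separated in the raw kernel record.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(
        part.decode("utf-8", errors="replace")
        for part in raw.split(b"\x00")
        if part
    )

# The value seen repeatedly in this log:
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> sshd-session: core [priv]
```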
Jan 14 05:47:34.941000 audit[6251]: USER_START pid=6251 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:34.946000 audit[6255]: CRED_ACQ pid=6255 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:35.105332 kubelet[2825]: E0114 05:47:35.104707 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:47:35.315448 sshd[6255]: Connection closed by 10.0.0.1 port 57868 Jan 14 05:47:35.314955 sshd-session[6251]: pam_unix(sshd:session): session closed for user core Jan 14 05:47:35.316000 audit[6251]: USER_END pid=6251 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:35.317000 audit[6251]: CRED_DISP pid=6251 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:35.335139 systemd[1]: sshd@16-10.0.0.28:22-10.0.0.1:57868.service: Deactivated successfully. Jan 14 05:47:35.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.28:22-10.0.0.1:57868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:35.341922 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 05:47:35.345780 systemd-logind[1596]: Session 18 logged out. Waiting for processes to exit. Jan 14 05:47:35.351827 systemd[1]: Started sshd@17-10.0.0.28:22-10.0.0.1:57878.service - OpenSSH per-connection server daemon (10.0.0.1:57878). Jan 14 05:47:35.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.28:22-10.0.0.1:57878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:35.353948 systemd-logind[1596]: Removed session 18. 
Jan 14 05:47:35.516000 audit[6267]: USER_ACCT pid=6267 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:35.517727 sshd[6267]: Accepted publickey for core from 10.0.0.1 port 57878 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:47:35.519000 audit[6267]: CRED_ACQ pid=6267 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:35.519000 audit[6267]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdacf6db30 a2=3 a3=0 items=0 ppid=1 pid=6267 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:35.519000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:35.521750 sshd-session[6267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:47:35.535686 systemd-logind[1596]: New session 19 of user core. Jan 14 05:47:35.544948 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 14 05:47:35.553000 audit[6267]: USER_START pid=6267 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:35.557000 audit[6271]: CRED_ACQ pid=6271 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:35.761472 sshd[6271]: Connection closed by 10.0.0.1 port 57878 Jan 14 05:47:35.762061 sshd-session[6267]: pam_unix(sshd:session): session closed for user core Jan 14 05:47:35.765000 audit[6267]: USER_END pid=6267 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:35.765000 audit[6267]: CRED_DISP pid=6267 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:35.773825 systemd[1]: sshd@17-10.0.0.28:22-10.0.0.1:57878.service: Deactivated successfully. Jan 14 05:47:35.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.28:22-10.0.0.1:57878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:35.778890 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 05:47:35.781890 systemd-logind[1596]: Session 19 logged out. Waiting for processes to exit. 
Jan 14 05:47:35.784829 systemd-logind[1596]: Removed session 19. Jan 14 05:47:36.104499 kubelet[2825]: E0114 05:47:36.104430 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:47:37.104736 kubelet[2825]: E0114 05:47:37.104121 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:47:38.105355 containerd[1620]: time="2026-01-14T05:47:38.104426524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 05:47:38.110943 kubelet[2825]: E0114 05:47:38.110818 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:47:38.262895 containerd[1620]: time="2026-01-14T05:47:38.262066495Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:47:38.266872 containerd[1620]: time="2026-01-14T05:47:38.266648845Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 05:47:38.266872 containerd[1620]: time="2026-01-14T05:47:38.266756995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 05:47:38.268396 
kubelet[2825]: E0114 05:47:38.267970 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 05:47:38.268396 kubelet[2825]: E0114 05:47:38.268087 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 05:47:38.269485 kubelet[2825]: E0114 05:47:38.268655 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h4bdc_calico-system(1cbfb118-b594-42d6-be3d-0e1840e8dae4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 05:47:38.270393 kubelet[2825]: E0114 05:47:38.269909 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:47:39.104999 containerd[1620]: time="2026-01-14T05:47:39.104860233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 05:47:39.260828 containerd[1620]: time="2026-01-14T05:47:39.260715510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:47:39.264905 containerd[1620]: 
time="2026-01-14T05:47:39.264748049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 05:47:39.264905 containerd[1620]: time="2026-01-14T05:47:39.264857481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 05:47:39.266421 kubelet[2825]: E0114 05:47:39.265975 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 05:47:39.266421 kubelet[2825]: E0114 05:47:39.266132 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 05:47:39.266891 kubelet[2825]: E0114 05:47:39.266596 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-85449f874f-xn2d4_calico-system(64a69192-713c-418d-907c-75ea3917f0cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 05:47:39.266891 kubelet[2825]: E0114 05:47:39.266638 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:47:40.104018 kubelet[2825]: E0114 05:47:40.103789 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:47:40.803675 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 05:47:40.803814 kernel: audit: type=1130 audit(1768369660.783:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.28:22-10.0.0.1:57884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:40.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.28:22-10.0.0.1:57884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:40.784886 systemd[1]: Started sshd@18-10.0.0.28:22-10.0.0.1:57884.service - OpenSSH per-connection server daemon (10.0.0.1:57884). 
Jan 14 05:47:40.948359 kernel: audit: type=1101 audit(1768369660.913:833): pid=6307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:40.913000 audit[6307]: USER_ACCT pid=6307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:40.950134 sshd[6307]: Accepted publickey for core from 10.0.0.1 port 57884 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:47:40.955952 sshd-session[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:47:40.951000 audit[6307]: CRED_ACQ pid=6307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:40.976713 systemd-logind[1596]: New session 20 of user core. 
Jan 14 05:47:41.008674 kernel: audit: type=1103 audit(1768369660.951:834): pid=6307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:41.008758 kernel: audit: type=1006 audit(1768369660.951:835): pid=6307 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 14 05:47:41.008786 kernel: audit: type=1300 audit(1768369660.951:835): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd39ec5240 a2=3 a3=0 items=0 ppid=1 pid=6307 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:40.951000 audit[6307]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd39ec5240 a2=3 a3=0 items=0 ppid=1 pid=6307 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:41.040386 kernel: audit: type=1327 audit(1768369660.951:835): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:40.951000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:41.059111 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 05:47:41.066000 audit[6307]: USER_START pid=6307 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:41.113542 kernel: audit: type=1105 audit(1768369661.066:836): pid=6307 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:41.071000 audit[6311]: CRED_ACQ pid=6311 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:41.127650 kubelet[2825]: E0114 05:47:41.115609 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 
05:47:41.148559 kernel: audit: type=1103 audit(1768369661.071:837): pid=6311 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:41.363088 sshd[6311]: Connection closed by 10.0.0.1 port 57884 Jan 14 05:47:41.365908 sshd-session[6307]: pam_unix(sshd:session): session closed for user core Jan 14 05:47:41.367000 audit[6307]: USER_END pid=6307 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:41.374590 systemd-logind[1596]: Session 20 logged out. Waiting for processes to exit. Jan 14 05:47:41.375582 systemd[1]: sshd@18-10.0.0.28:22-10.0.0.1:57884.service: Deactivated successfully. Jan 14 05:47:41.379078 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 05:47:41.385982 systemd-logind[1596]: Removed session 20. 
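Editor's note: audit records carry their own timestamp in the form `audit(EPOCH.MS:SERIAL)`, independent of the journal's rendered wall-clock prefix. A minimal sketch for converting the epoch part to UTC (which matches the journal timestamps in this log):

```python
from datetime import datetime, timezone

# Convert an "audit(EPOCH.MS:SERIAL)" stamp to a UTC wall-clock string.
def audit_time(stamp: str) -> str:
    inner = stamp[stamp.index("(") + 1 : stamp.index(")")]
    epoch, _serial = inner.split(":")  # serial is the per-event ID
    ts = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    return ts.strftime("%Y-%m-%d %H:%M:%S")

# The USER_END record above:
print(audit_time("audit(1768369661.367:838)"))
# -> 2026-01-14 05:47:41
```

The serial after the colon groups related records: here types 1106 (`USER_END`) and 1104 (`CRED_DISP`) carry consecutive serials 838 and 839 for the same session teardown.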
Jan 14 05:47:41.367000 audit[6307]: CRED_DISP pid=6307 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:41.444295 kernel: audit: type=1106 audit(1768369661.367:838): pid=6307 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:41.444408 kernel: audit: type=1104 audit(1768369661.367:839): pid=6307 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:41.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.28:22-10.0.0.1:57884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:47:42.133537 kubelet[2825]: E0114 05:47:42.132596 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a" Jan 14 05:47:44.105110 kubelet[2825]: E0114 05:47:44.104917 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:47:46.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.28:22-10.0.0.1:45950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:46.382755 systemd[1]: Started sshd@19-10.0.0.28:22-10.0.0.1:45950.service - OpenSSH per-connection server daemon (10.0.0.1:45950). Jan 14 05:47:46.390886 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:47:46.390917 kernel: audit: type=1130 audit(1768369666.381:841): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.28:22-10.0.0.1:45950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:47:46.541000 audit[6327]: USER_ACCT pid=6327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:46.546125 sshd[6327]: Accepted publickey for core from 10.0.0.1 port 45950 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:47:46.548872 sshd-session[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:47:46.566111 systemd-logind[1596]: New session 21 of user core. Jan 14 05:47:46.545000 audit[6327]: CRED_ACQ pid=6327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:46.600818 kernel: audit: type=1101 audit(1768369666.541:842): pid=6327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:46.601700 kernel: audit: type=1103 audit(1768369666.545:843): pid=6327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:46.601753 kernel: audit: type=1006 audit(1768369666.545:844): pid=6327 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 05:47:46.545000 audit[6327]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2c1c2d90 a2=3 a3=0 items=0 ppid=1 pid=6327 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:46.652580 kernel: audit: type=1300 audit(1768369666.545:844): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2c1c2d90 a2=3 a3=0 items=0 ppid=1 pid=6327 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:46.545000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:46.653429 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 05:47:46.666621 kernel: audit: type=1327 audit(1768369666.545:844): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:46.666000 audit[6327]: USER_START pid=6327 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:46.707017 kernel: audit: type=1105 audit(1768369666.666:845): pid=6327 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:46.673000 audit[6331]: CRED_ACQ pid=6331 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:46.739975 kernel: audit: type=1103 audit(1768369666.673:846): pid=6331 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:46.924699 sshd[6331]: Connection closed by 10.0.0.1 port 45950 Jan 14 05:47:46.926678 sshd-session[6327]: pam_unix(sshd:session): session closed for user core Jan 14 05:47:46.927000 audit[6327]: USER_END pid=6327 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:46.934907 systemd[1]: sshd@19-10.0.0.28:22-10.0.0.1:45950.service: Deactivated successfully. Jan 14 05:47:46.940630 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 05:47:46.945879 systemd-logind[1596]: Session 21 logged out. Waiting for processes to exit. Jan 14 05:47:46.950018 systemd-logind[1596]: Removed session 21. 
Jan 14 05:47:46.976940 kernel: audit: type=1106 audit(1768369666.927:847): pid=6327 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:46.977102 kernel: audit: type=1104 audit(1768369666.928:848): pid=6327 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:46.928000 audit[6327]: CRED_DISP pid=6327 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:46.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.28:22-10.0.0.1:45950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:47:50.105617 containerd[1620]: time="2026-01-14T05:47:50.104924263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 05:47:50.176983 containerd[1620]: time="2026-01-14T05:47:50.176938209Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:47:50.184594 containerd[1620]: time="2026-01-14T05:47:50.184087278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 05:47:50.184594 containerd[1620]: time="2026-01-14T05:47:50.184506528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 05:47:50.185467 kubelet[2825]: E0114 05:47:50.185039 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:47:50.185467 kubelet[2825]: E0114 05:47:50.185081 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:47:50.186835 kubelet[2825]: E0114 05:47:50.185524 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7d668c555c-qwjx8_calico-apiserver(e1f153ba-430a-43e5-84a9-e29936603f76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 05:47:50.186835 kubelet[2825]: E0114 05:47:50.186692 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:47:51.107676 kubelet[2825]: E0114 05:47:51.107607 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:47:51.113990 containerd[1620]: time="2026-01-14T05:47:51.112785524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 05:47:51.202141 containerd[1620]: time="2026-01-14T05:47:51.202015370Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 05:47:51.205827 containerd[1620]: time="2026-01-14T05:47:51.205110701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 05:47:51.205827 containerd[1620]: time="2026-01-14T05:47:51.205717296Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 05:47:51.206647 kubelet[2825]: E0114 05:47:51.206590 2825 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:47:51.207506 kubelet[2825]: E0114 05:47:51.207027 2825 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 05:47:51.207506 kubelet[2825]: E0114 05:47:51.207125 2825 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c8b67549f-bpw89_calico-apiserver(5832da08-4ce6-484b-b421-5f73ad1ce8d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 05:47:51.208764 kubelet[2825]: E0114 05:47:51.208740 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:47:51.948794 systemd[1]: Started sshd@20-10.0.0.28:22-10.0.0.1:45964.service - OpenSSH per-connection server daemon (10.0.0.1:45964). 
Jan 14 05:47:51.984929 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:47:51.985067 kernel: audit: type=1130 audit(1768369671.947:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.28:22-10.0.0.1:45964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:51.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.28:22-10.0.0.1:45964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:52.108143 kubelet[2825]: E0114 05:47:52.107089 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:47:52.168000 audit[6345]: USER_ACCT pid=6345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:52.195561 sshd[6345]: Accepted publickey for core from 10.0.0.1 port 45964 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:47:52.198501 sshd-session[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:47:52.210645 kernel: audit: type=1101 audit(1768369672.168:851): pid=6345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:52.193000 audit[6345]: CRED_ACQ pid=6345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:52.221502 systemd-logind[1596]: New session 22 of user core. Jan 14 05:47:52.274089 kernel: audit: type=1103 audit(1768369672.193:852): pid=6345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:52.274581 kernel: audit: type=1006 audit(1768369672.193:853): pid=6345 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 05:47:52.274650 kernel: audit: type=1300 audit(1768369672.193:853): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff13d83f80 a2=3 a3=0 items=0 ppid=1 pid=6345 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:52.193000 audit[6345]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff13d83f80 a2=3 a3=0 items=0 ppid=1 pid=6345 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:52.309736 kernel: audit: type=1327 audit(1768369672.193:853): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:52.193000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:52.323792 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 14 05:47:52.331000 audit[6345]: USER_START pid=6345 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:52.371458 kernel: audit: type=1105 audit(1768369672.331:854): pid=6345 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:52.371718 kernel: audit: type=1103 audit(1768369672.337:855): pid=6349 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:52.337000 audit[6349]: CRED_ACQ pid=6349 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:52.591518 sshd[6349]: Connection closed by 10.0.0.1 port 45964 Jan 14 05:47:52.591088 sshd-session[6345]: pam_unix(sshd:session): session closed for user core Jan 14 05:47:52.593000 audit[6345]: USER_END pid=6345 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:52.599088 systemd-logind[1596]: Session 22 logged out. Waiting for processes to exit. 
Jan 14 05:47:52.601832 systemd[1]: sshd@20-10.0.0.28:22-10.0.0.1:45964.service: Deactivated successfully. Jan 14 05:47:52.605495 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 05:47:52.608579 systemd-logind[1596]: Removed session 22. Jan 14 05:47:52.593000 audit[6345]: CRED_DISP pid=6345 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:52.672974 kernel: audit: type=1106 audit(1768369672.593:856): pid=6345 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:52.673088 kernel: audit: type=1104 audit(1768369672.593:857): pid=6345 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:52.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.28:22-10.0.0.1:45964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:47:53.105380 kubelet[2825]: E0114 05:47:53.105140 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:47:53.107827 kubelet[2825]: E0114 05:47:53.107732 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a" Jan 14 05:47:56.113363 kubelet[2825]: E0114 05:47:56.111920 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:47:57.609706 systemd[1]: Started sshd@21-10.0.0.28:22-10.0.0.1:52238.service - OpenSSH per-connection server daemon (10.0.0.1:52238). Jan 14 05:47:57.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.28:22-10.0.0.1:52238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:57.620046 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:47:57.620506 kernel: audit: type=1130 audit(1768369677.609:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.28:22-10.0.0.1:52238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:47:57.765000 audit[6362]: USER_ACCT pid=6362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:57.767692 sshd[6362]: Accepted publickey for core from 10.0.0.1 port 52238 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:47:57.770806 sshd-session[6362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:47:57.781620 systemd-logind[1596]: New session 23 of user core. 
Jan 14 05:47:57.768000 audit[6362]: CRED_ACQ pid=6362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:57.835366 kernel: audit: type=1101 audit(1768369677.765:860): pid=6362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:57.835672 kernel: audit: type=1103 audit(1768369677.768:861): pid=6362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:57.835724 kernel: audit: type=1006 audit(1768369677.768:862): pid=6362 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 14 05:47:57.768000 audit[6362]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff631b8860 a2=3 a3=0 items=0 ppid=1 pid=6362 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:57.894910 kernel: audit: type=1300 audit(1768369677.768:862): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff631b8860 a2=3 a3=0 items=0 ppid=1 pid=6362 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:47:57.895536 kernel: audit: type=1327 audit(1768369677.768:862): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:57.768000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:47:57.910866 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 05:47:57.917000 audit[6362]: USER_START pid=6362 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:57.969023 kernel: audit: type=1105 audit(1768369677.917:863): pid=6362 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:57.923000 audit[6366]: CRED_ACQ pid=6366 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:58.007622 kernel: audit: type=1103 audit(1768369677.923:864): pid=6366 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:58.192687 sshd[6366]: Connection closed by 10.0.0.1 port 52238 Jan 14 05:47:58.194867 sshd-session[6362]: pam_unix(sshd:session): session closed for user core Jan 14 05:47:58.202000 audit[6362]: USER_END pid=6362 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 14 05:47:58.224000 audit[6362]: CRED_DISP pid=6362 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:58.255034 systemd[1]: sshd@21-10.0.0.28:22-10.0.0.1:52238.service: Deactivated successfully. Jan 14 05:47:58.260773 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 05:47:58.268659 systemd-logind[1596]: Session 23 logged out. Waiting for processes to exit. Jan 14 05:47:58.273056 systemd-logind[1596]: Removed session 23. Jan 14 05:47:58.287873 kernel: audit: type=1106 audit(1768369678.202:865): pid=6362 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:58.287944 kernel: audit: type=1104 audit(1768369678.224:866): pid=6362 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:47:58.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.28:22-10.0.0.1:52238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:47:59.101361 kubelet[2825]: E0114 05:47:59.099792 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:48:03.108363 kubelet[2825]: E0114 05:48:03.106087 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:48:03.108363 kubelet[2825]: E0114 05:48:03.107527 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:48:03.108363 kubelet[2825]: E0114 05:48:03.107602 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h4bdc" 
podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:48:03.212119 systemd[1]: Started sshd@22-10.0.0.28:22-10.0.0.1:52252.service - OpenSSH per-connection server daemon (10.0.0.1:52252). Jan 14 05:48:03.228684 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:48:03.228820 kernel: audit: type=1130 audit(1768369683.213:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.28:22-10.0.0.1:52252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:48:03.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.28:22-10.0.0.1:52252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:48:03.417000 audit[6407]: USER_ACCT pid=6407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:03.420697 sshd[6407]: Accepted publickey for core from 10.0.0.1 port 52252 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:48:03.425589 sshd-session[6407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:48:03.450811 systemd-logind[1596]: New session 24 of user core. 
Jan 14 05:48:03.459744 kernel: audit: type=1101 audit(1768369683.417:869): pid=6407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:03.459846 kernel: audit: type=1103 audit(1768369683.422:870): pid=6407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:03.422000 audit[6407]: CRED_ACQ pid=6407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:03.513471 kernel: audit: type=1006 audit(1768369683.422:871): pid=6407 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 14 05:48:03.422000 audit[6407]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4886a6e0 a2=3 a3=0 items=0 ppid=1 pid=6407 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:03.548365 kernel: audit: type=1300 audit(1768369683.422:871): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4886a6e0 a2=3 a3=0 items=0 ppid=1 pid=6407 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:03.549692 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 14 05:48:03.422000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:48:03.571507 kernel: audit: type=1327 audit(1768369683.422:871): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:48:03.571637 kernel: audit: type=1105 audit(1768369683.564:872): pid=6407 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:03.564000 audit[6407]: USER_START pid=6407 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:03.576000 audit[6411]: CRED_ACQ pid=6411 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:03.651402 kernel: audit: type=1103 audit(1768369683.576:873): pid=6411 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:03.882586 sshd[6411]: Connection closed by 10.0.0.1 port 52252 Jan 14 05:48:03.884794 sshd-session[6407]: pam_unix(sshd:session): session closed for user core Jan 14 05:48:03.886000 audit[6407]: USER_END pid=6407 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:03.902553 systemd[1]: sshd@22-10.0.0.28:22-10.0.0.1:52252.service: Deactivated successfully. Jan 14 05:48:03.907728 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 05:48:03.910824 systemd-logind[1596]: Session 24 logged out. Waiting for processes to exit. Jan 14 05:48:03.915742 systemd-logind[1596]: Removed session 24. Jan 14 05:48:03.886000 audit[6407]: CRED_DISP pid=6407 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:03.953852 kernel: audit: type=1106 audit(1768369683.886:874): pid=6407 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:03.953960 kernel: audit: type=1104 audit(1768369683.886:875): pid=6407 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:03.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.28:22-10.0.0.1:52252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:48:04.109155 kubelet[2825]: E0114 05:48:04.108562 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:48:05.106344 kubelet[2825]: E0114 05:48:05.105679 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a" Jan 14 05:48:08.104644 kubelet[2825]: E0114 05:48:08.104423 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:48:08.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.28:22-10.0.0.1:58160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:48:08.904889 systemd[1]: Started sshd@23-10.0.0.28:22-10.0.0.1:58160.service - OpenSSH per-connection server daemon (10.0.0.1:58160). Jan 14 05:48:08.912021 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:48:08.912064 kernel: audit: type=1130 audit(1768369688.904:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.28:22-10.0.0.1:58160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:48:09.015000 audit[6430]: USER_ACCT pid=6430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:09.016549 sshd[6430]: Accepted publickey for core from 10.0.0.1 port 58160 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:48:09.019529 sshd-session[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:48:09.028809 systemd-logind[1596]: New session 25 of user core. 
Jan 14 05:48:09.017000 audit[6430]: CRED_ACQ pid=6430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:09.071550 kernel: audit: type=1101 audit(1768369689.015:878): pid=6430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:09.071649 kernel: audit: type=1103 audit(1768369689.017:879): pid=6430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:09.071677 kernel: audit: type=1006 audit(1768369689.017:880): pid=6430 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 14 05:48:09.017000 audit[6430]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7e66c2a0 a2=3 a3=0 items=0 ppid=1 pid=6430 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:09.111333 kubelet[2825]: E0114 05:48:09.110493 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:48:09.117097 kernel: audit: type=1300 audit(1768369689.017:880): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7e66c2a0 a2=3 a3=0 items=0 ppid=1 pid=6430 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:09.017000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:48:09.124712 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 14 05:48:09.131540 kernel: audit: type=1327 audit(1768369689.017:880): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:48:09.135000 audit[6430]: USER_START pid=6430 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:09.182730 kernel: audit: type=1105 audit(1768369689.135:881): pid=6430 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:09.140000 audit[6434]: CRED_ACQ pid=6434 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:09.220589 kernel: audit: type=1103 audit(1768369689.140:882): pid=6434 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:09.389451 sshd[6434]: Connection closed by 10.0.0.1 port 58160 Jan 14 05:48:09.390808 sshd-session[6430]: pam_unix(sshd:session): session closed for user core Jan 14 05:48:09.396000 audit[6430]: USER_END pid=6430 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:09.400541 systemd[1]: sshd@23-10.0.0.28:22-10.0.0.1:58160.service: Deactivated successfully. Jan 14 05:48:09.407790 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 05:48:09.412544 systemd-logind[1596]: Session 25 logged out. Waiting for processes to exit. Jan 14 05:48:09.414594 systemd-logind[1596]: Removed session 25. 
Jan 14 05:48:09.396000 audit[6430]: CRED_DISP pid=6430 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:09.466340 kernel: audit: type=1106 audit(1768369689.396:883): pid=6430 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:09.466592 kernel: audit: type=1104 audit(1768369689.396:884): pid=6430 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:09.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.28:22-10.0.0.1:58160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:48:12.115319 kubelet[2825]: E0114 05:48:12.114857 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:48:14.106658 kubelet[2825]: E0114 05:48:14.106142 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:48:14.410665 systemd[1]: Started sshd@24-10.0.0.28:22-10.0.0.1:57886.service - OpenSSH per-connection server daemon (10.0.0.1:57886). Jan 14 05:48:14.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.28:22-10.0.0.1:57886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:48:14.418036 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:48:14.418335 kernel: audit: type=1130 audit(1768369694.409:886): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.28:22-10.0.0.1:57886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:48:14.539000 audit[6448]: USER_ACCT pid=6448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:14.541495 sshd[6448]: Accepted publickey for core from 10.0.0.1 port 57886 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:48:14.547794 sshd-session[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:48:14.565606 systemd-logind[1596]: New session 26 of user core. Jan 14 05:48:14.542000 audit[6448]: CRED_ACQ pid=6448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:14.599487 kernel: audit: type=1101 audit(1768369694.539:887): pid=6448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:14.600008 kernel: audit: type=1103 audit(1768369694.542:888): pid=6448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:14.600053 kernel: audit: type=1006 audit(1768369694.542:889): pid=6448 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 14 05:48:14.542000 audit[6448]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff367ba400 a2=3 a3=0 items=0 ppid=1 pid=6448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:14.648642 kernel: audit: type=1300 audit(1768369694.542:889): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff367ba400 a2=3 a3=0 items=0 ppid=1 pid=6448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:14.542000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:48:14.651428 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 14 05:48:14.660682 kernel: audit: type=1327 audit(1768369694.542:889): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:48:14.664000 audit[6448]: USER_START pid=6448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:14.703827 kernel: audit: type=1105 audit(1768369694.664:890): pid=6448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:14.704138 kernel: audit: type=1103 audit(1768369694.670:891): pid=6454 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:14.670000 audit[6454]: CRED_ACQ pid=6454 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:14.927577 sshd[6454]: Connection closed by 10.0.0.1 port 57886 Jan 14 05:48:14.928323 sshd-session[6448]: pam_unix(sshd:session): session closed for user core Jan 14 05:48:14.930000 audit[6448]: USER_END pid=6448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:14.936016 systemd[1]: sshd@24-10.0.0.28:22-10.0.0.1:57886.service: Deactivated successfully. Jan 14 05:48:14.940450 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 05:48:14.946712 systemd-logind[1596]: Session 26 logged out. Waiting for processes to exit. Jan 14 05:48:14.949570 systemd-logind[1596]: Removed session 26. 
Jan 14 05:48:14.970257 kernel: audit: type=1106 audit(1768369694.930:892): pid=6448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:14.930000 audit[6448]: CRED_DISP pid=6448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:15.002130 kernel: audit: type=1104 audit(1768369694.930:893): pid=6448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:14.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.28:22-10.0.0.1:57886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:48:17.102247 kubelet[2825]: E0114 05:48:17.101595 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:48:18.109205 kubelet[2825]: E0114 05:48:18.108982 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:48:18.109655 kubelet[2825]: E0114 05:48:18.109487 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:48:18.112201 kubelet[2825]: E0114 05:48:18.111313 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a" Jan 14 05:48:19.947230 systemd[1]: Started sshd@25-10.0.0.28:22-10.0.0.1:57902.service - OpenSSH per-connection server daemon (10.0.0.1:57902). Jan 14 05:48:19.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.28:22-10.0.0.1:57902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:48:19.949489 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:48:19.949605 kernel: audit: type=1130 audit(1768369699.946:895): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.28:22-10.0.0.1:57902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:48:20.016000 audit[6471]: USER_ACCT pid=6471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:20.017535 sshd[6471]: Accepted publickey for core from 10.0.0.1 port 57902 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:48:20.019673 sshd-session[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:48:20.016000 audit[6471]: CRED_ACQ pid=6471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:20.025311 systemd-logind[1596]: New session 27 of user core. Jan 14 05:48:20.032672 kernel: audit: type=1101 audit(1768369700.016:896): pid=6471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:20.032729 kernel: audit: type=1103 audit(1768369700.016:897): pid=6471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:20.032751 kernel: audit: type=1006 audit(1768369700.016:898): pid=6471 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 14 05:48:20.016000 audit[6471]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffc6fd360 a2=3 a3=0 items=0 ppid=1 pid=6471 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:20.046231 kernel: audit: type=1300 audit(1768369700.016:898): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffc6fd360 a2=3 a3=0 items=0 ppid=1 pid=6471 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:20.046295 kernel: audit: type=1327 audit(1768369700.016:898): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:48:20.016000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:48:20.051474 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 14 05:48:20.053000 audit[6471]: USER_START pid=6471 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:20.055000 audit[6475]: CRED_ACQ pid=6475 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:20.077211 kernel: audit: type=1105 audit(1768369700.053:899): pid=6471 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:20.078142 kernel: audit: type=1103 audit(1768369700.055:900): pid=6475 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:20.142865 sshd[6475]: Connection closed by 10.0.0.1 port 57902 Jan 14 05:48:20.144992 sshd-session[6471]: pam_unix(sshd:session): session closed for user core Jan 14 05:48:20.145000 audit[6471]: USER_END pid=6471 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:20.150295 systemd[1]: sshd@25-10.0.0.28:22-10.0.0.1:57902.service: Deactivated successfully. Jan 14 05:48:20.152964 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 05:48:20.154131 systemd-logind[1596]: Session 27 logged out. Waiting for processes to exit. Jan 14 05:48:20.145000 audit[6471]: CRED_DISP pid=6471 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:20.155941 systemd-logind[1596]: Removed session 27. 
Jan 14 05:48:20.162745 kernel: audit: type=1106 audit(1768369700.145:901): pid=6471 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:20.162859 kernel: audit: type=1104 audit(1768369700.145:902): pid=6471 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:20.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.28:22-10.0.0.1:57902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:48:23.102454 kubelet[2825]: E0114 05:48:23.101904 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:48:24.107394 kubelet[2825]: E0114 05:48:24.107271 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc" Jan 14 05:48:25.101862 kubelet[2825]: E0114 05:48:25.101785 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:48:25.158482 systemd[1]: Started sshd@26-10.0.0.28:22-10.0.0.1:46330.service - OpenSSH per-connection server daemon (10.0.0.1:46330). Jan 14 05:48:25.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.28:22-10.0.0.1:46330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:48:25.161739 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:48:25.161808 kernel: audit: type=1130 audit(1768369705.157:904): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.28:22-10.0.0.1:46330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Jan 14 05:48:25.236000 audit[6488]: USER_ACCT pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:25.237698 sshd[6488]: Accepted publickey for core from 10.0.0.1 port 46330 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:48:25.240553 sshd-session[6488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:48:25.237000 audit[6488]: CRED_ACQ pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:25.250584 systemd-logind[1596]: New session 28 of user core.
Jan 14 05:48:25.257308 kernel: audit: type=1101 audit(1768369705.236:905): pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:25.257422 kernel: audit: type=1103 audit(1768369705.237:906): pid=6488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:25.263498 kernel: audit: type=1006 audit(1768369705.237:907): pid=6488 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Jan 14 05:48:25.237000 audit[6488]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe7712c70 a2=3 a3=0 items=0 ppid=1 pid=6488 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:48:25.274456 kernel: audit: type=1300 audit(1768369705.237:907): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe7712c70 a2=3 a3=0 items=0 ppid=1 pid=6488 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:48:25.237000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:48:25.276485 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 14 05:48:25.279203 kernel: audit: type=1327 audit(1768369705.237:907): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:48:25.283000 audit[6488]: USER_START pid=6488 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:25.298034 kernel: audit: type=1105 audit(1768369705.283:908): pid=6488 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:25.297000 audit[6492]: CRED_ACQ pid=6492 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:25.308202 kernel: audit: type=1103 audit(1768369705.297:909): pid=6492 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:25.400482 sshd[6492]: Connection closed by 10.0.0.1 port 46330
Jan 14 05:48:25.400898 sshd-session[6488]: pam_unix(sshd:session): session closed for user core
Jan 14 05:48:25.402000 audit[6488]: USER_END pid=6488 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:25.407332 systemd[1]: sshd@26-10.0.0.28:22-10.0.0.1:46330.service: Deactivated successfully.
Jan 14 05:48:25.411883 systemd[1]: session-28.scope: Deactivated successfully.
Jan 14 05:48:25.413258 systemd-logind[1596]: Session 28 logged out. Waiting for processes to exit.
Jan 14 05:48:25.415753 systemd-logind[1596]: Removed session 28.
Jan 14 05:48:25.402000 audit[6488]: CRED_DISP pid=6488 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:25.426359 kernel: audit: type=1106 audit(1768369705.402:910): pid=6488 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:25.426445 kernel: audit: type=1104 audit(1768369705.402:911): pid=6488 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:25.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.28:22-10.0.0.1:46330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:48:29.101624 kubelet[2825]: E0114 05:48:29.101470 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76"
Jan 14 05:48:29.101624 kubelet[2825]: E0114 05:48:29.101486 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd"
Jan 14 05:48:30.107104 kubelet[2825]: E0114 05:48:30.107062 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4"
Jan 14 05:48:30.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.28:22-10.0.0.1:46346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:48:30.414941 systemd[1]: Started sshd@27-10.0.0.28:22-10.0.0.1:46346.service - OpenSSH per-connection server daemon (10.0.0.1:46346).
Jan 14 05:48:30.417537 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 05:48:30.417601 kernel: audit: type=1130 audit(1768369710.413:913): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.28:22-10.0.0.1:46346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:48:30.476000 audit[6533]: USER_ACCT pid=6533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.478105 sshd[6533]: Accepted publickey for core from 10.0.0.1 port 46346 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:48:30.481291 sshd-session[6533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:48:30.478000 audit[6533]: CRED_ACQ pid=6533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.492740 systemd-logind[1596]: New session 29 of user core.
Jan 14 05:48:30.503029 kernel: audit: type=1101 audit(1768369710.476:914): pid=6533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.503370 kernel: audit: type=1103 audit(1768369710.478:915): pid=6533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.503525 kernel: audit: type=1006 audit(1768369710.478:916): pid=6533 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Jan 14 05:48:30.478000 audit[6533]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc08927ef0 a2=3 a3=0 items=0 ppid=1 pid=6533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:48:30.523271 kernel: audit: type=1300 audit(1768369710.478:916): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc08927ef0 a2=3 a3=0 items=0 ppid=1 pid=6533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:48:30.523379 kernel: audit: type=1327 audit(1768369710.478:916): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:48:30.478000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:48:30.524594 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 14 05:48:30.531000 audit[6533]: USER_START pid=6533 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.534000 audit[6537]: CRED_ACQ pid=6537 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.555528 kernel: audit: type=1105 audit(1768369710.531:917): pid=6533 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.555609 kernel: audit: type=1103 audit(1768369710.534:918): pid=6537 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.631706 sshd[6537]: Connection closed by 10.0.0.1 port 46346
Jan 14 05:48:30.633521 sshd-session[6533]: pam_unix(sshd:session): session closed for user core
Jan 14 05:48:30.634000 audit[6533]: USER_END pid=6533 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.634000 audit[6533]: CRED_DISP pid=6533 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.651925 kernel: audit: type=1106 audit(1768369710.634:919): pid=6533 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.651993 kernel: audit: type=1104 audit(1768369710.634:920): pid=6533 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.658523 systemd[1]: sshd@27-10.0.0.28:22-10.0.0.1:46346.service: Deactivated successfully.
Jan 14 05:48:30.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.28:22-10.0.0.1:46346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:48:30.661452 systemd[1]: session-29.scope: Deactivated successfully.
Jan 14 05:48:30.662823 systemd-logind[1596]: Session 29 logged out. Waiting for processes to exit.
Jan 14 05:48:30.666762 systemd[1]: Started sshd@28-10.0.0.28:22-10.0.0.1:46358.service - OpenSSH per-connection server daemon (10.0.0.1:46358).
Jan 14 05:48:30.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.28:22-10.0.0.1:46358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:48:30.668227 systemd-logind[1596]: Removed session 29.
Jan 14 05:48:30.731000 audit[6551]: USER_ACCT pid=6551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.732995 sshd[6551]: Accepted publickey for core from 10.0.0.1 port 46358 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:48:30.733000 audit[6551]: CRED_ACQ pid=6551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.733000 audit[6551]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda5945780 a2=3 a3=0 items=0 ppid=1 pid=6551 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:48:30.733000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:48:30.735568 sshd-session[6551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:48:30.741735 systemd-logind[1596]: New session 30 of user core.
Jan 14 05:48:30.746379 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 14 05:48:30.748000 audit[6551]: USER_START pid=6551 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:30.751000 audit[6555]: CRED_ACQ pid=6555 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:31.030526 sshd[6555]: Connection closed by 10.0.0.1 port 46358
Jan 14 05:48:31.030910 sshd-session[6551]: pam_unix(sshd:session): session closed for user core
Jan 14 05:48:31.032000 audit[6551]: USER_END pid=6551 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:31.032000 audit[6551]: CRED_DISP pid=6551 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:31.037879 systemd[1]: sshd@28-10.0.0.28:22-10.0.0.1:46358.service: Deactivated successfully.
Jan 14 05:48:31.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.28:22-10.0.0.1:46358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:48:31.040404 systemd[1]: session-30.scope: Deactivated successfully.
Jan 14 05:48:31.043620 systemd-logind[1596]: Session 30 logged out. Waiting for processes to exit.
Jan 14 05:48:31.046879 systemd[1]: Started sshd@29-10.0.0.28:22-10.0.0.1:46360.service - OpenSSH per-connection server daemon (10.0.0.1:46360).
Jan 14 05:48:31.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.28:22-10.0.0.1:46360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:48:31.049082 systemd-logind[1596]: Removed session 30.
Jan 14 05:48:31.125000 audit[6567]: USER_ACCT pid=6567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:31.127384 sshd[6567]: Accepted publickey for core from 10.0.0.1 port 46360 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:48:31.127000 audit[6567]: CRED_ACQ pid=6567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:31.127000 audit[6567]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdaae245b0 a2=3 a3=0 items=0 ppid=1 pid=6567 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:48:31.127000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:48:31.130311 sshd-session[6567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:48:31.136617 systemd-logind[1596]: New session 31 of user core.
Jan 14 05:48:31.145441 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 14 05:48:31.147000 audit[6567]: USER_START pid=6567 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:31.149000 audit[6572]: CRED_ACQ pid=6572 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:31.752875 sshd[6572]: Connection closed by 10.0.0.1 port 46360
Jan 14 05:48:31.753395 sshd-session[6567]: pam_unix(sshd:session): session closed for user core
Jan 14 05:48:31.754000 audit[6567]: USER_END pid=6567 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:31.754000 audit[6567]: CRED_DISP pid=6567 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:31.767389 systemd[1]: sshd@29-10.0.0.28:22-10.0.0.1:46360.service: Deactivated successfully.
Jan 14 05:48:31.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.28:22-10.0.0.1:46360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:48:31.774046 systemd[1]: session-31.scope: Deactivated successfully.
Jan 14 05:48:31.778865 systemd-logind[1596]: Session 31 logged out. Waiting for processes to exit.
Jan 14 05:48:31.788597 systemd[1]: Started sshd@30-10.0.0.28:22-10.0.0.1:46366.service - OpenSSH per-connection server daemon (10.0.0.1:46366).
Jan 14 05:48:31.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.28:22-10.0.0.1:46366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:48:31.792486 systemd-logind[1596]: Removed session 31.
Jan 14 05:48:31.834000 audit[6591]: NETFILTER_CFG table=filter:134 family=2 entries=26 op=nft_register_rule pid=6591 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 05:48:31.834000 audit[6591]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcc6891290 a2=0 a3=7ffcc689127c items=0 ppid=2986 pid=6591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:48:31.834000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 05:48:31.841000 audit[6591]: NETFILTER_CFG table=nat:135 family=2 entries=20 op=nft_register_rule pid=6591 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 05:48:31.841000 audit[6591]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcc6891290 a2=0 a3=0 items=0 ppid=2986 pid=6591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:48:31.841000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 05:48:31.875000 audit[6589]: USER_ACCT pid=6589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:31.877518 sshd[6589]: Accepted publickey for core from 10.0.0.1 port 46366 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:48:31.877000 audit[6589]: CRED_ACQ pid=6589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:31.877000 audit[6589]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe88b9400 a2=3 a3=0 items=0 ppid=1 pid=6589 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:48:31.877000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:48:31.880237 sshd-session[6589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:48:31.888652 systemd-logind[1596]: New session 32 of user core.
Jan 14 05:48:31.893368 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 14 05:48:31.896000 audit[6589]: USER_START pid=6589 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:31.898000 audit[6594]: CRED_ACQ pid=6594 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:32.105419 kubelet[2825]: E0114 05:48:32.104897 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a"
Jan 14 05:48:32.148535 sshd[6594]: Connection closed by 10.0.0.1 port 46366
Jan 14 05:48:32.149920 sshd-session[6589]: pam_unix(sshd:session): session closed for user core
Jan 14 05:48:32.154000 audit[6589]: USER_END pid=6589 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:32.154000 audit[6589]: CRED_DISP pid=6589 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:32.161719 systemd[1]: sshd@30-10.0.0.28:22-10.0.0.1:46366.service: Deactivated successfully.
Jan 14 05:48:32.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.28:22-10.0.0.1:46366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:48:32.166566 systemd[1]: session-32.scope: Deactivated successfully.
Jan 14 05:48:32.170712 systemd-logind[1596]: Session 32 logged out. Waiting for processes to exit.
Jan 14 05:48:32.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.28:22-10.0.0.1:46376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:48:32.175642 systemd[1]: Started sshd@31-10.0.0.28:22-10.0.0.1:46376.service - OpenSSH per-connection server daemon (10.0.0.1:46376).
Jan 14 05:48:32.181730 systemd-logind[1596]: Removed session 32.
Jan 14 05:48:32.257000 audit[6606]: USER_ACCT pid=6606 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:32.258524 sshd[6606]: Accepted publickey for core from 10.0.0.1 port 46376 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg
Jan 14 05:48:32.258000 audit[6606]: CRED_ACQ pid=6606 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:32.258000 audit[6606]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff3b78f80 a2=3 a3=0 items=0 ppid=1 pid=6606 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:48:32.258000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 05:48:32.262440 sshd-session[6606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 05:48:32.270249 systemd-logind[1596]: New session 33 of user core.
Jan 14 05:48:32.282393 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 14 05:48:32.287000 audit[6606]: USER_START pid=6606 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:32.289000 audit[6610]: CRED_ACQ pid=6610 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:32.372737 sshd[6610]: Connection closed by 10.0.0.1 port 46376
Jan 14 05:48:32.373075 sshd-session[6606]: pam_unix(sshd:session): session closed for user core
Jan 14 05:48:32.373000 audit[6606]: USER_END pid=6606 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:32.373000 audit[6606]: CRED_DISP pid=6606 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 05:48:32.378446 systemd[1]: sshd@31-10.0.0.28:22-10.0.0.1:46376.service: Deactivated successfully.
Jan 14 05:48:32.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.28:22-10.0.0.1:46376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 05:48:32.381455 systemd[1]: session-33.scope: Deactivated successfully.
Jan 14 05:48:32.383000 systemd-logind[1596]: Session 33 logged out. Waiting for processes to exit.
Jan 14 05:48:32.384629 systemd-logind[1596]: Removed session 33.
Jan 14 05:48:32.861000 audit[6623]: NETFILTER_CFG table=filter:136 family=2 entries=38 op=nft_register_rule pid=6623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 05:48:32.861000 audit[6623]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd1ec86740 a2=0 a3=7ffd1ec8672c items=0 ppid=2986 pid=6623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:48:32.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 05:48:32.870000 audit[6623]: NETFILTER_CFG table=nat:137 family=2 entries=20 op=nft_register_rule pid=6623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 05:48:32.870000 audit[6623]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd1ec86740 a2=0 a3=0 items=0 ppid=2986 pid=6623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:48:32.870000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 05:48:34.101126 kubelet[2825]: E0114 05:48:34.101051 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 05:48:36.102057 kubelet[2825]: E0114 05:48:36.101949 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2"
Jan 14 05:48:36.104140 kubelet[2825]: E0114 05:48:36.102110 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-46w2k" podUID="2b560ec8-f090-4614-a1d5-13a4bc0ce8dc"
Jan 14 05:48:36.931000 audit[6627]: NETFILTER_CFG table=filter:138 family=2 entries=26 op=nft_register_rule pid=6627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 05:48:36.934909 kernel: kauditd_printk_skb: 57 callbacks suppressed
Jan 14 05:48:36.935262 kernel: audit: type=1325 audit(1768369716.931:962): table=filter:138 family=2 entries=26 op=nft_register_rule pid=6627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 05:48:36.931000 audit[6627]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffed3432f0 a2=0 a3=7fffed3432dc items=0 ppid=2986 pid=6627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 05:48:36.951880 kernel: audit: type=1300 audit(1768369716.931:962): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffed3432f0 a2=0 a3=7fffed3432dc items=0 ppid=2986 pid=6627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:36.951949 kernel: audit: type=1327 audit(1768369716.931:962): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:48:36.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:48:36.961000 audit[6627]: NETFILTER_CFG table=nat:139 family=2 entries=104 op=nft_register_chain pid=6627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:48:36.961000 audit[6627]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffed3432f0 a2=0 a3=7fffed3432dc items=0 ppid=2986 pid=6627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:36.978887 kernel: audit: type=1325 audit(1768369716.961:963): table=nat:139 family=2 entries=104 op=nft_register_chain pid=6627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 05:48:36.978956 kernel: audit: type=1300 audit(1768369716.961:963): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffed3432f0 a2=0 a3=7fffed3432dc items=0 ppid=2986 pid=6627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:36.979000 kernel: audit: type=1327 audit(1768369716.961:963): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 
14 05:48:36.961000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 05:48:37.100798 kubelet[2825]: E0114 05:48:37.100686 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-nrl4m" podUID="f8e3b291-7413-4398-b3ac-57e03796db9f" Jan 14 05:48:37.394450 systemd[1]: Started sshd@32-10.0.0.28:22-10.0.0.1:41492.service - OpenSSH per-connection server daemon (10.0.0.1:41492). Jan 14 05:48:37.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.28:22-10.0.0.1:41492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:48:37.402357 kernel: audit: type=1130 audit(1768369717.393:964): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.28:22-10.0.0.1:41492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:48:37.452000 audit[6629]: USER_ACCT pid=6629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:37.454008 sshd[6629]: Accepted publickey for core from 10.0.0.1 port 41492 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:48:37.456326 sshd-session[6629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:48:37.461922 systemd-logind[1596]: New session 34 of user core. Jan 14 05:48:37.453000 audit[6629]: CRED_ACQ pid=6629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:37.470772 kernel: audit: type=1101 audit(1768369717.452:965): pid=6629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:37.470842 kernel: audit: type=1103 audit(1768369717.453:966): pid=6629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:37.470872 kernel: audit: type=1006 audit(1768369717.453:967): pid=6629 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jan 14 05:48:37.453000 audit[6629]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc00c767b0 a2=3 a3=0 items=0 ppid=1 pid=6629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:37.453000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:48:37.481409 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 14 05:48:37.483000 audit[6629]: USER_START pid=6629 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:37.485000 audit[6635]: CRED_ACQ pid=6635 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:37.564965 sshd[6635]: Connection closed by 10.0.0.1 port 41492 Jan 14 05:48:37.565332 sshd-session[6629]: pam_unix(sshd:session): session closed for user core Jan 14 05:48:37.565000 audit[6629]: USER_END pid=6629 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:37.566000 audit[6629]: CRED_DISP pid=6629 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:37.570659 systemd[1]: sshd@32-10.0.0.28:22-10.0.0.1:41492.service: Deactivated successfully. 
Jan 14 05:48:37.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.28:22-10.0.0.1:41492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:48:37.573011 systemd[1]: session-34.scope: Deactivated successfully. Jan 14 05:48:37.574094 systemd-logind[1596]: Session 34 logged out. Waiting for processes to exit. Jan 14 05:48:37.575426 systemd-logind[1596]: Removed session 34. Jan 14 05:48:41.102250 kubelet[2825]: E0114 05:48:41.101343 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:48:41.103480 kubelet[2825]: E0114 05:48:41.103414 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h4bdc" podUID="1cbfb118-b594-42d6-be3d-0e1840e8dae4" Jan 14 05:48:42.102009 kubelet[2825]: E0114 05:48:42.101950 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d668c555c-qwjx8" podUID="e1f153ba-430a-43e5-84a9-e29936603f76" Jan 14 05:48:42.583281 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 05:48:42.583384 kernel: audit: 
type=1130 audit(1768369722.578:973): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.28:22-10.0.0.1:41508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:48:42.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.28:22-10.0.0.1:41508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:48:42.579854 systemd[1]: Started sshd@33-10.0.0.28:22-10.0.0.1:41508.service - OpenSSH per-connection server daemon (10.0.0.1:41508). Jan 14 05:48:42.647000 audit[6653]: USER_ACCT pid=6653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:42.649869 sshd[6653]: Accepted publickey for core from 10.0.0.1 port 41508 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:48:42.651408 sshd-session[6653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:48:42.660296 kernel: audit: type=1101 audit(1768369722.647:974): pid=6653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:42.648000 audit[6653]: CRED_ACQ pid=6653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:42.666751 systemd-logind[1596]: New session 35 of user core. 
Jan 14 05:48:42.674114 kernel: audit: type=1103 audit(1768369722.648:975): pid=6653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:42.674272 kernel: audit: type=1006 audit(1768369722.649:976): pid=6653 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Jan 14 05:48:42.674310 kernel: audit: type=1300 audit(1768369722.649:976): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3e1b85c0 a2=3 a3=0 items=0 ppid=1 pid=6653 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:42.649000 audit[6653]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3e1b85c0 a2=3 a3=0 items=0 ppid=1 pid=6653 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:42.649000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:48:42.687080 kernel: audit: type=1327 audit(1768369722.649:976): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:48:42.688443 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 14 05:48:42.691000 audit[6653]: USER_START pid=6653 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:42.694000 audit[6658]: CRED_ACQ pid=6658 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:42.712508 kernel: audit: type=1105 audit(1768369722.691:977): pid=6653 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:42.712606 kernel: audit: type=1103 audit(1768369722.694:978): pid=6658 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:42.778489 sshd[6658]: Connection closed by 10.0.0.1 port 41508 Jan 14 05:48:42.779032 sshd-session[6653]: pam_unix(sshd:session): session closed for user core Jan 14 05:48:42.780000 audit[6653]: USER_END pid=6653 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:42.784864 systemd[1]: sshd@33-10.0.0.28:22-10.0.0.1:41508.service: Deactivated successfully. 
Jan 14 05:48:42.787672 systemd[1]: session-35.scope: Deactivated successfully. Jan 14 05:48:42.788756 systemd-logind[1596]: Session 35 logged out. Waiting for processes to exit. Jan 14 05:48:42.790882 systemd-logind[1596]: Removed session 35. Jan 14 05:48:42.780000 audit[6653]: CRED_DISP pid=6653 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:42.799399 kernel: audit: type=1106 audit(1768369722.780:979): pid=6653 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:42.799465 kernel: audit: type=1104 audit(1768369722.780:980): pid=6653 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:42.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.28:22-10.0.0.1:41508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 05:48:44.101049 kubelet[2825]: E0114 05:48:44.100991 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:48:44.101480 kubelet[2825]: E0114 05:48:44.100986 2825 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 05:48:44.102549 kubelet[2825]: E0114 05:48:44.102421 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85449f874f-xn2d4" podUID="64a69192-713c-418d-907c-75ea3917f0cd" Jan 14 05:48:47.101483 kubelet[2825]: E0114 05:48:47.101229 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c8b67549f-bpw89" podUID="5832da08-4ce6-484b-b421-5f73ad1ce8d2" Jan 14 05:48:47.101483 kubelet[2825]: E0114 05:48:47.101399 2825 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f97fd46d-kvn2d" podUID="a883e1fb-a961-4974-a5a0-9481f730a55a" Jan 14 05:48:47.804248 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 05:48:47.804350 kernel: audit: type=1130 audit(1768369727.793:982): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.28:22-10.0.0.1:51984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:48:47.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.28:22-10.0.0.1:51984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 05:48:47.794734 systemd[1]: Started sshd@34-10.0.0.28:22-10.0.0.1:51984.service - OpenSSH per-connection server daemon (10.0.0.1:51984). 
Jan 14 05:48:47.867000 audit[6674]: USER_ACCT pid=6674 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:47.871015 sshd-session[6674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 05:48:47.877402 kernel: audit: type=1101 audit(1768369727.867:983): pid=6674 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:47.877437 sshd[6674]: Accepted publickey for core from 10.0.0.1 port 51984 ssh2: RSA SHA256:Cly/YAk8sTFm16ELl1FPICIkCv25YSx9w3D4BITJvfg Jan 14 05:48:47.877041 systemd-logind[1596]: New session 36 of user core. Jan 14 05:48:47.868000 audit[6674]: CRED_ACQ pid=6674 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:47.887230 kernel: audit: type=1103 audit(1768369727.868:984): pid=6674 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:47.887336 kernel: audit: type=1006 audit(1768369727.868:985): pid=6674 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1 Jan 14 05:48:47.868000 audit[6674]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef9718100 a2=3 a3=0 items=0 ppid=1 pid=6674 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:47.891413 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 14 05:48:47.900422 kernel: audit: type=1300 audit(1768369727.868:985): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef9718100 a2=3 a3=0 items=0 ppid=1 pid=6674 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 05:48:47.868000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:48:47.904249 kernel: audit: type=1327 audit(1768369727.868:985): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 05:48:47.897000 audit[6674]: USER_START pid=6674 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:47.914941 kernel: audit: type=1105 audit(1768369727.897:986): pid=6674 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:47.915005 kernel: audit: type=1103 audit(1768369727.900:987): pid=6678 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:47.900000 audit[6678]: CRED_ACQ pid=6678 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:47.986270 sshd[6678]: Connection closed by 10.0.0.1 port 51984 Jan 14 05:48:47.988727 sshd-session[6674]: pam_unix(sshd:session): session closed for user core Jan 14 05:48:47.989000 audit[6674]: USER_END pid=6674 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:47.993851 systemd[1]: sshd@34-10.0.0.28:22-10.0.0.1:51984.service: Deactivated successfully. Jan 14 05:48:47.996838 systemd[1]: session-36.scope: Deactivated successfully. Jan 14 05:48:47.997966 systemd-logind[1596]: Session 36 logged out. Waiting for processes to exit. Jan 14 05:48:47.999693 systemd-logind[1596]: Removed session 36. 
Jan 14 05:48:47.989000 audit[6674]: CRED_DISP pid=6674 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:48.008584 kernel: audit: type=1106 audit(1768369727.989:988): pid=6674 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:48.008641 kernel: audit: type=1104 audit(1768369727.989:989): pid=6674 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 05:48:47.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.28:22-10.0.0.1:51984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'