Jan 23 05:38:24.343064 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 03:18:47 -00 2026 Jan 23 05:38:24.343087 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a92e3d1fa43eb937dfa610fcbbc2e2f315bddbcd68fb450286e9840385c92d1 Jan 23 05:38:24.343099 kernel: BIOS-provided physical RAM map: Jan 23 05:38:24.343106 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 23 05:38:24.343112 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 23 05:38:24.343118 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 23 05:38:24.343124 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 23 05:38:24.343131 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 23 05:38:24.343136 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 23 05:38:24.343142 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 23 05:38:24.343151 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 23 05:38:24.343157 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 23 05:38:24.343163 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 23 05:38:24.343169 kernel: NX (Execute Disable) protection: active Jan 23 05:38:24.343176 kernel: APIC: Static calls initialized Jan 23 05:38:24.343184 kernel: SMBIOS 2.8 present. 
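The BIOS-e820 ranges above are inclusive, so each range covers `end - start + 1` bytes; summing the `usable` ranges gives the RAM the firmware handed to the kernel. A minimal sketch (the sample lines are copied from this log; the parser itself is only an illustration, not a kernel tool):

```python
import re

# Sample BIOS-e820 lines copied verbatim from the log above.
E820_LINES = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
"""

def usable_bytes(text):
    """Sum the sizes of all e820 ranges marked 'usable'.

    Each range is inclusive of both endpoints, so its size is
    end - start + 1 bytes.
    """
    total = 0
    for m in re.finditer(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable", text):
        start, end = int(m.group(1), 16), int(m.group(2), 16)
        total += end - start + 1
    return total

print(usable_bytes(E820_LINES) // 1024, "KiB usable")
```

The result is close to, but slightly above, the `2571752K` total the kernel later reports in its `Memory:` line, since the kernel carves further reservations out of the firmware-usable ranges.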
Jan 23 05:38:24.343191 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 23 05:38:24.343197 kernel: DMI: Memory slots populated: 1/1 Jan 23 05:38:24.343204 kernel: Hypervisor detected: KVM Jan 23 05:38:24.343210 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 23 05:38:24.343216 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 23 05:38:24.343223 kernel: kvm-clock: using sched offset of 4172790310 cycles Jan 23 05:38:24.343230 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 05:38:24.343237 kernel: tsc: Detected 2445.424 MHz processor Jan 23 05:38:24.343246 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 05:38:24.343253 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 05:38:24.343259 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 23 05:38:24.343266 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 23 05:38:24.343273 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 23 05:38:24.343280 kernel: Using GB pages for direct mapping Jan 23 05:38:24.343287 kernel: ACPI: Early table checksum verification disabled Jan 23 05:38:24.343295 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 23 05:38:24.343331 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 05:38:24.343339 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 05:38:24.343346 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 05:38:24.343352 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 23 05:38:24.343359 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 05:38:24.343366 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 05:38:24.343375 kernel: ACPI: MCFG 
0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 05:38:24.343382 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 05:38:24.343392 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 23 05:38:24.343400 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 23 05:38:24.343407 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 23 05:38:24.343414 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 23 05:38:24.343424 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 23 05:38:24.343431 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 23 05:38:24.343437 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 23 05:38:24.343444 kernel: No NUMA configuration found Jan 23 05:38:24.343451 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 23 05:38:24.343458 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 23 05:38:24.343467 kernel: Zone ranges: Jan 23 05:38:24.343475 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 05:38:24.343481 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 23 05:38:24.343488 kernel: Normal empty Jan 23 05:38:24.343495 kernel: Device empty Jan 23 05:38:24.343502 kernel: Movable zone start for each node Jan 23 05:38:24.343509 kernel: Early memory node ranges Jan 23 05:38:24.343515 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 23 05:38:24.343524 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 23 05:38:24.343531 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 23 05:38:24.343538 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 05:38:24.343545 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 23 05:38:24.343552 kernel: On node 0, zone DMA32: 12324 pages in unavailable 
ranges Jan 23 05:38:24.343559 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 23 05:38:24.343566 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 23 05:38:24.343575 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 23 05:38:24.343582 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 23 05:38:24.343589 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 23 05:38:24.343596 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 23 05:38:24.343603 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 23 05:38:24.343610 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 23 05:38:24.343617 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 05:38:24.343624 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 23 05:38:24.343633 kernel: TSC deadline timer available Jan 23 05:38:24.343640 kernel: CPU topo: Max. logical packages: 1 Jan 23 05:38:24.343647 kernel: CPU topo: Max. logical dies: 1 Jan 23 05:38:24.343654 kernel: CPU topo: Max. dies per package: 1 Jan 23 05:38:24.343661 kernel: CPU topo: Max. threads per core: 1 Jan 23 05:38:24.343668 kernel: CPU topo: Num. cores per package: 4 Jan 23 05:38:24.343674 kernel: CPU topo: Num. 
threads per package: 4 Jan 23 05:38:24.343681 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 23 05:38:24.343690 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 23 05:38:24.343697 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 23 05:38:24.343704 kernel: kvm-guest: setup PV sched yield Jan 23 05:38:24.343711 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 23 05:38:24.343718 kernel: Booting paravirtualized kernel on KVM Jan 23 05:38:24.343725 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 05:38:24.343732 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 23 05:38:24.343741 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 23 05:38:24.343748 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 23 05:38:24.343755 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 23 05:38:24.343762 kernel: kvm-guest: PV spinlocks enabled Jan 23 05:38:24.343769 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 05:38:24.343776 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a92e3d1fa43eb937dfa610fcbbc2e2f315bddbcd68fb450286e9840385c92d1 Jan 23 05:38:24.343784 kernel: random: crng init done Jan 23 05:38:24.343793 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 05:38:24.343800 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 05:38:24.343807 kernel: Fallback order for Node 0: 0 Jan 23 05:38:24.343814 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 642938 Jan 23 05:38:24.343821 kernel: Policy zone: DMA32 Jan 23 05:38:24.343828 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 05:38:24.343835 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 23 05:38:24.343844 kernel: ftrace: allocating 40128 entries in 157 pages Jan 23 05:38:24.343908 kernel: ftrace: allocated 157 pages with 5 groups Jan 23 05:38:24.343924 kernel: Dynamic Preempt: voluntary Jan 23 05:38:24.343933 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 05:38:24.343944 kernel: rcu: RCU event tracing is enabled. Jan 23 05:38:24.343952 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 23 05:38:24.343960 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 05:38:24.343970 kernel: Rude variant of Tasks RCU enabled. Jan 23 05:38:24.343977 kernel: Tracing variant of Tasks RCU enabled. Jan 23 05:38:24.343983 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 05:38:24.343990 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 23 05:38:24.343998 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 05:38:24.344005 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 05:38:24.344012 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 05:38:24.344019 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 23 05:38:24.344028 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 23 05:38:24.344042 kernel: Console: colour VGA+ 80x25 Jan 23 05:38:24.344051 kernel: printk: legacy console [ttyS0] enabled Jan 23 05:38:24.344059 kernel: ACPI: Core revision 20240827 Jan 23 05:38:24.344066 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 23 05:38:24.344073 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 05:38:24.344080 kernel: x2apic enabled Jan 23 05:38:24.344088 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 05:38:24.344095 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 23 05:38:24.344105 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 23 05:38:24.344112 kernel: kvm-guest: setup PV IPIs Jan 23 05:38:24.344119 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 23 05:38:24.344127 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns Jan 23 05:38:24.344136 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Jan 23 05:38:24.344144 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 23 05:38:24.344151 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 23 05:38:24.344158 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 23 05:38:24.344165 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 05:38:24.344178 kernel: Spectre V2 : Mitigation: Retpolines Jan 23 05:38:24.344193 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 23 05:38:24.344210 kernel: Speculative Store Bypass: Vulnerable Jan 23 05:38:24.344222 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 23 05:38:24.344235 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Jan 23 05:38:24.344247 kernel: active return thunk: srso_alias_return_thunk Jan 23 05:38:24.344260 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 23 05:38:24.344272 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 23 05:38:24.344286 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 23 05:38:24.344338 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 05:38:24.344353 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 05:38:24.344365 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 05:38:24.344373 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 05:38:24.344380 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 23 05:38:24.344388 kernel: Freeing SMP alternatives memory: 32K Jan 23 05:38:24.344395 kernel: pid_max: default: 32768 minimum: 301 Jan 23 05:38:24.344405 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 05:38:24.344413 kernel: landlock: Up and running. Jan 23 05:38:24.344420 kernel: SELinux: Initializing. Jan 23 05:38:24.344428 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 05:38:24.344439 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 05:38:24.344453 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 23 05:38:24.344466 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 23 05:38:24.344483 kernel: signal: max sigframe size: 1776 Jan 23 05:38:24.344495 kernel: rcu: Hierarchical SRCU implementation. Jan 23 05:38:24.344507 kernel: rcu: Max phase no-delay instances is 400. 
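The Spectre, Speculative Store Bypass, and SRSO entries above all follow a `<vulnerability>: <status>` pattern, which makes them easy to pull out of a captured log; a minimal sketch (sample lines copied from this log — at runtime the same information is exposed under `/sys/devices/system/cpu/vulnerabilities/`):

```python
# Sample mitigation lines copied from the log above.
LOG = """\
Spectre V2 : Mitigation: Retpolines
Speculative Store Bypass: Vulnerable
Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
"""

# Split each line on the first colon: the left side is the
# vulnerability name, the right side its mitigation status.
status = {}
for line in LOG.splitlines():
    name, _, state = line.partition(":")
    status[name.strip()] = state.strip()

for name, state in status.items():
    print(f"{name:40} {state}")
```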
Jan 23 05:38:24.344519 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 05:38:24.344531 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 23 05:38:24.344544 kernel: smp: Bringing up secondary CPUs ... Jan 23 05:38:24.344556 kernel: smpboot: x86: Booting SMP configuration: Jan 23 05:38:24.344571 kernel: .... node #0, CPUs: #1 #2 #3 Jan 23 05:38:24.344583 kernel: smp: Brought up 1 node, 4 CPUs Jan 23 05:38:24.344595 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 23 05:38:24.344608 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 120520K reserved, 0K cma-reserved) Jan 23 05:38:24.344620 kernel: devtmpfs: initialized Jan 23 05:38:24.344632 kernel: x86/mm: Memory block size: 128MB Jan 23 05:38:24.344644 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 05:38:24.344659 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 23 05:38:24.344671 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 05:38:24.344683 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 05:38:24.344695 kernel: audit: initializing netlink subsys (disabled) Jan 23 05:38:24.344708 kernel: audit: type=2000 audit(1769146701.018:1): state=initialized audit_enabled=0 res=1 Jan 23 05:38:24.344719 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 05:38:24.344731 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 23 05:38:24.344746 kernel: cpuidle: using governor menu Jan 23 05:38:24.344758 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 05:38:24.344770 kernel: dca service started, version 1.12.1 Jan 23 05:38:24.344782 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 23 05:38:24.344794 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry 
Jan 23 05:38:24.344807 kernel: PCI: Using configuration type 1 for base access Jan 23 05:38:24.344819 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 23 05:38:24.344834 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 05:38:24.344846 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 05:38:24.344904 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 05:38:24.344918 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 05:38:24.344953 kernel: ACPI: Added _OSI(Module Device) Jan 23 05:38:24.344965 kernel: ACPI: Added _OSI(Processor Device) Jan 23 05:38:24.344977 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 05:38:24.344993 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 05:38:24.345005 kernel: ACPI: Interpreter enabled Jan 23 05:38:24.345017 kernel: ACPI: PM: (supports S0 S3 S5) Jan 23 05:38:24.345029 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 05:38:24.345042 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 05:38:24.345054 kernel: PCI: Using E820 reservations for host bridge windows Jan 23 05:38:24.345066 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 23 05:38:24.345082 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 05:38:24.345533 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 05:38:24.345991 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 23 05:38:24.346520 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 23 05:38:24.346796 kernel: PCI host bridge to bus 0000:00 Jan 23 05:38:24.347622 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 23 05:38:24.348569 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 23 05:38:24.349430 kernel: 
pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 23 05:38:24.350126 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 23 05:38:24.350366 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 23 05:38:24.350530 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 23 05:38:24.350696 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 05:38:24.350964 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 23 05:38:24.351154 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 23 05:38:24.351365 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 23 05:38:24.351536 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 23 05:38:24.351701 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 23 05:38:24.351969 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 23 05:38:24.352238 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 23 05:38:24.352453 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 23 05:38:24.352638 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 23 05:38:24.352848 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 23 05:38:24.353172 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 23 05:38:24.353473 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 23 05:38:24.353713 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jan 23 05:38:24.353969 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Jan 23 05:38:24.354157 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 23 05:38:24.354369 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jan 23 05:38:24.354544 kernel: pci 0000:00:04.0: BAR 
1 [mem 0xfebd3000-0xfebd3fff] Jan 23 05:38:24.354710 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 23 05:38:24.354949 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 23 05:38:24.355139 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 23 05:38:24.355342 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 23 05:38:24.355698 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 23 05:38:24.356084 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jan 23 05:38:24.356418 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jan 23 05:38:24.356727 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 23 05:38:24.357084 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 23 05:38:24.357106 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 05:38:24.357128 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 23 05:38:24.357144 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 23 05:38:24.357159 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 23 05:38:24.357174 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 23 05:38:24.357188 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 23 05:38:24.357203 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 23 05:38:24.357217 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 23 05:38:24.357232 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 23 05:38:24.357251 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 23 05:38:24.357265 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 23 05:38:24.357280 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 23 05:38:24.357294 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 23 05:38:24.357339 kernel: 
ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 23 05:38:24.357355 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 23 05:38:24.357370 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 23 05:38:24.357390 kernel: iommu: Default domain type: Translated Jan 23 05:38:24.357406 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 05:38:24.357421 kernel: PCI: Using ACPI for IRQ routing Jan 23 05:38:24.357435 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 05:38:24.357450 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 23 05:38:24.357465 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 23 05:38:24.357758 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 23 05:38:24.358102 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 23 05:38:24.358432 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 23 05:38:24.358454 kernel: vgaarb: loaded Jan 23 05:38:24.358470 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 23 05:38:24.358485 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 23 05:38:24.358500 kernel: clocksource: Switched to clocksource kvm-clock Jan 23 05:38:24.358515 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 05:38:24.358536 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 05:38:24.358551 kernel: pnp: PnP ACPI init Jan 23 05:38:24.358910 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 23 05:38:24.358935 kernel: pnp: PnP ACPI: found 6 devices Jan 23 05:38:24.358950 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 05:38:24.358965 kernel: NET: Registered PF_INET protocol family Jan 23 05:38:24.358986 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 05:38:24.359001 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) 
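The `(order: N, M bytes)` figures in the hash-table lines above are internally consistent: an order-N allocation spans 2^N contiguous 4096-byte pages. A quick check against the values reported in this log:

```python
PAGE_SIZE = 4096  # x86-64 base page size, as used by these allocations

def table_bytes(order):
    """An order-N allocation spans 2**N contiguous pages."""
    return (1 << order) * PAGE_SIZE

# Values taken from the hash-table lines in the log above.
assert table_bytes(10) == 4194304  # Dentry cache, order 10
assert table_bytes(6) == 262144    # TCP established, order 6
assert table_bytes(8) == 1048576   # TCP bind, order 8
assert table_bytes(3) == 32768     # tcp_listen_portaddr_hash, order 3
print("all hash-table sizes consistent")
```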
Jan 23 05:38:24.359016 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 05:38:24.359031 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 05:38:24.359047 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 05:38:24.359061 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 05:38:24.359076 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 05:38:24.359095 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 05:38:24.359110 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 05:38:24.359125 kernel: NET: Registered PF_XDP protocol family Jan 23 05:38:24.359438 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 05:38:24.359714 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 05:38:24.360050 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 05:38:24.360354 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 23 05:38:24.360641 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 23 05:38:24.360976 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 23 05:38:24.361000 kernel: PCI: CLS 0 bytes, default 64 Jan 23 05:38:24.361016 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns Jan 23 05:38:24.361031 kernel: Initialise system trusted keyrings Jan 23 05:38:24.361046 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 05:38:24.361060 kernel: Key type asymmetric registered Jan 23 05:38:24.361081 kernel: Asymmetric key parser 'x509' registered Jan 23 05:38:24.361095 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 05:38:24.361110 kernel: io scheduler mq-deadline registered Jan 23 05:38:24.361124 kernel: io scheduler kyber registered Jan 23 
05:38:24.361139 kernel: io scheduler bfq registered Jan 23 05:38:24.361154 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 05:38:24.361169 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 05:38:24.361227 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 05:38:24.361262 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 05:38:24.361295 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 05:38:24.361340 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 05:38:24.361356 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 05:38:24.361371 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 05:38:24.361386 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 05:38:24.361734 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 23 05:38:24.361758 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 05:38:24.362141 kernel: rtc_cmos 00:04: registered as rtc0 Jan 23 05:38:24.362467 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T05:38:22 UTC (1769146702) Jan 23 05:38:24.362829 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 23 05:38:24.362919 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 23 05:38:24.362945 kernel: NET: Registered PF_INET6 protocol family Jan 23 05:38:24.362960 kernel: Segment Routing with IPv6 Jan 23 05:38:24.362974 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 05:38:24.362988 kernel: NET: Registered PF_PACKET protocol family Jan 23 05:38:24.362999 kernel: Key type dns_resolver registered Jan 23 05:38:24.363013 kernel: IPI shorthand broadcast: enabled Jan 23 05:38:24.363025 kernel: sched_clock: Marking stable (2102016427, 426155651)->(2756666910, -228494832) Jan 23 05:38:24.363041 kernel: registered taskstats version 1 Jan 23 05:38:24.363053 kernel: Loading compiled-in X.509 certificates Jan 23 05:38:24.363066 kernel: Loaded 
X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: cda82b35a562b93154d436d51bdd40a1e66199a4' Jan 23 05:38:24.363078 kernel: Demotion targets for Node 0: null Jan 23 05:38:24.363090 kernel: Key type .fscrypt registered Jan 23 05:38:24.363102 kernel: Key type fscrypt-provisioning registered Jan 23 05:38:24.363114 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 05:38:24.363129 kernel: ima: Allocated hash algorithm: sha1 Jan 23 05:38:24.363141 kernel: ima: No architecture policies found Jan 23 05:38:24.363152 kernel: clk: Disabling unused clocks Jan 23 05:38:24.363165 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 23 05:38:24.363177 kernel: Write protecting the kernel read-only data: 47104k Jan 23 05:38:24.363189 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 23 05:38:24.363201 kernel: Run /init as init process Jan 23 05:38:24.363216 kernel: with arguments: Jan 23 05:38:24.363229 kernel: /init Jan 23 05:38:24.363241 kernel: with environment: Jan 23 05:38:24.363254 kernel: HOME=/ Jan 23 05:38:24.363266 kernel: TERM=linux Jan 23 05:38:24.363278 kernel: SCSI subsystem initialized Jan 23 05:38:24.363290 kernel: libata version 3.00 loaded. 
Jan 23 05:38:24.363582 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 05:38:24.363608 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 05:38:24.363844 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 05:38:24.364158 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 05:38:24.364435 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 05:38:24.364710 kernel: scsi host0: ahci
Jan 23 05:38:24.365029 kernel: scsi host1: ahci
Jan 23 05:38:24.365277 kernel: scsi host2: ahci
Jan 23 05:38:24.365575 kernel: scsi host3: ahci
Jan 23 05:38:24.365921 kernel: scsi host4: ahci
Jan 23 05:38:24.366184 kernel: scsi host5: ahci
Jan 23 05:38:24.366203 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 23 05:38:24.366221 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 23 05:38:24.366234 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 23 05:38:24.366246 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 23 05:38:24.366259 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 23 05:38:24.366271 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 23 05:38:24.366284 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 05:38:24.366299 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 05:38:24.366350 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 05:38:24.366362 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 05:38:24.366375 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 23 05:38:24.366388 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 05:38:24.366401 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 05:38:24.366413 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 23 05:38:24.366426 kernel: ata3.00: applying bridge limits
Jan 23 05:38:24.366442 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 05:38:24.366454 kernel: ata3.00: configured for UDMA/100
Jan 23 05:38:24.366735 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 23 05:38:24.367066 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 23 05:38:24.367338 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 23 05:38:24.367358 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 05:38:24.367377 kernel: GPT:16515071 != 27000831
Jan 23 05:38:24.367390 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 05:38:24.367402 kernel: GPT:16515071 != 27000831
Jan 23 05:38:24.367414 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 05:38:24.367427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 05:38:24.367693 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 23 05:38:24.367716 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 05:38:24.368016 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 23 05:38:24.368035 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 05:38:24.368051 kernel: device-mapper: uevent: version 1.0.3
Jan 23 05:38:24.368064 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 05:38:24.368077 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 23 05:38:24.368090 kernel: raid6: avx2x4 gen() 22933 MB/s
Jan 23 05:38:24.368107 kernel: raid6: avx2x2 gen() 23755 MB/s
Jan 23 05:38:24.368120 kernel: raid6: avx2x1 gen() 16303 MB/s
Jan 23 05:38:24.368132 kernel: raid6: using algorithm avx2x2 gen() 23755 MB/s
Jan 23 05:38:24.368144 kernel: raid6: .... xor() 30408 MB/s, rmw enabled
Jan 23 05:38:24.368157 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 05:38:24.368169 kernel: xor: automatically using best checksumming function avx
Jan 23 05:38:24.368182 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 05:38:24.368195 kernel: BTRFS: device fsid aba401b0-51aa-4ae6-ba9d-750a765944b4 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (182)
Jan 23 05:38:24.368210 kernel: BTRFS info (device dm-0): first mount of filesystem aba401b0-51aa-4ae6-ba9d-750a765944b4
Jan 23 05:38:24.368223 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 05:38:24.368236 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 05:38:24.368249 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 05:38:24.368268 kernel: loop: module loaded
Jan 23 05:38:24.368281 kernel: loop0: detected capacity change from 0 to 100552
Jan 23 05:38:24.368296 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 05:38:24.368343 systemd[1]: Successfully made /usr/ read-only.
Jan 23 05:38:24.368360 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 05:38:24.368374 systemd[1]: Detected virtualization kvm.
Jan 23 05:38:24.368390 systemd[1]: Detected architecture x86-64.
Jan 23 05:38:24.368402 systemd[1]: Running in initrd.
Jan 23 05:38:24.368415 systemd[1]: No hostname configured, using default hostname.
Jan 23 05:38:24.368428 systemd[1]: Hostname set to .
Jan 23 05:38:24.368441 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 23 05:38:24.368455 systemd[1]: Queued start job for default target initrd.target.
Jan 23 05:38:24.368470 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 05:38:24.368483 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 05:38:24.368499 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 05:38:24.368513 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 05:38:24.368528 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 05:38:24.368542 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 05:38:24.368559 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 05:38:24.368572 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 05:38:24.368586 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 05:38:24.368599 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 05:38:24.368613 systemd[1]: Reached target paths.target - Path Units.
Jan 23 05:38:24.368626 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 05:38:24.368639 systemd[1]: Reached target swap.target - Swaps.
Jan 23 05:38:24.368655 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 05:38:24.368668 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 05:38:24.368681 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 05:38:24.368694 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 23 05:38:24.368707 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 05:38:24.368720 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 05:38:24.368738 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 05:38:24.368751 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 05:38:24.368764 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 05:38:24.368778 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 05:38:24.368791 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 05:38:24.368806 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 05:38:24.368819 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 05:38:24.368836 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 05:38:24.368850 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 05:38:24.368903 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 05:38:24.368917 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 05:38:24.368930 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 05:38:24.368947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 05:38:24.368961 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 05:38:24.368975 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 05:38:24.368988 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 05:38:24.369001 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 05:38:24.369046 systemd-journald[316]: Collecting audit messages is enabled.
Jan 23 05:38:24.369074 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 05:38:24.369087 kernel: Bridge firewalling registered
Jan 23 05:38:24.369103 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 05:38:24.369118 systemd-journald[316]: Journal started
Jan 23 05:38:24.369146 systemd-journald[316]: Runtime Journal (/run/log/journal/4061683f67df4ad8969fbeaf904ed548) is 6M, max 48.2M, 42.1M free.
Jan 23 05:38:24.361520 systemd-modules-load[319]: Inserted module 'br_netfilter'
Jan 23 05:38:24.486435 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 05:38:24.486464 kernel: audit: type=1130 audit(1769146704.471:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.497727 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 05:38:24.511965 kernel: audit: type=1130 audit(1769146704.488:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.511996 kernel: audit: type=1130 audit(1769146704.498:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.512160 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 05:38:24.527992 kernel: audit: type=1130 audit(1769146704.514:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.531994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 05:38:24.538940 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 05:38:24.542050 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 05:38:24.561720 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 05:38:24.579564 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 05:38:24.585411 systemd-tmpfiles[341]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 05:38:24.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.592380 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 05:38:24.609847 kernel: audit: type=1130 audit(1769146704.578:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.609923 kernel: audit: type=1130 audit(1769146704.601:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.602497 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 05:38:24.626948 kernel: audit: type=1130 audit(1769146704.606:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.627063 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 05:38:24.644118 kernel: audit: type=1130 audit(1769146704.626:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.640829 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 05:38:24.653650 kernel: audit: type=1334 audit(1769146704.644:10): prog-id=6 op=LOAD
Jan 23 05:38:24.644000 audit: BPF prog-id=6 op=LOAD
Jan 23 05:38:24.649019 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 05:38:24.689488 dracut-cmdline[357]: dracut-109
Jan 23 05:38:24.696072 dracut-cmdline[357]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a92e3d1fa43eb937dfa610fcbbc2e2f315bddbcd68fb450286e9840385c92d1
Jan 23 05:38:24.736898 systemd-resolved[358]: Positive Trust Anchors:
Jan 23 05:38:24.736917 systemd-resolved[358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 05:38:24.736922 systemd-resolved[358]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 23 05:38:24.736948 systemd-resolved[358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 05:38:24.781758 systemd-resolved[358]: Defaulting to hostname 'linux'.
Jan 23 05:38:24.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.783286 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 05:38:24.787844 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 05:38:24.865952 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 05:38:24.881979 kernel: iscsi: registered transport (tcp)
Jan 23 05:38:24.904939 kernel: iscsi: registered transport (qla4xxx)
Jan 23 05:38:24.905028 kernel: QLogic iSCSI HBA Driver
Jan 23 05:38:24.939538 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 05:38:24.980652 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 05:38:24.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:24.984098 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 05:38:25.048155 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 05:38:25.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:25.054424 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 05:38:25.056732 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 05:38:25.102938 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 05:38:25.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:25.109000 audit: BPF prog-id=7 op=LOAD
Jan 23 05:38:25.109000 audit: BPF prog-id=8 op=LOAD
Jan 23 05:38:25.110999 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 05:38:25.149819 systemd-udevd[586]: Using default interface naming scheme 'v257'.
Jan 23 05:38:25.166014 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 05:38:25.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:25.170849 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 05:38:25.205989 dracut-pre-trigger[641]: rd.md=0: removing MD RAID activation
Jan 23 05:38:25.229457 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 05:38:25.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:25.235000 audit: BPF prog-id=9 op=LOAD
Jan 23 05:38:25.237131 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 05:38:25.253114 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 05:38:25.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:25.261448 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 05:38:25.304410 systemd-networkd[723]: lo: Link UP
Jan 23 05:38:25.304438 systemd-networkd[723]: lo: Gained carrier
Jan 23 05:38:25.305205 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 05:38:25.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:25.309533 systemd[1]: Reached target network.target - Network.
Jan 23 05:38:25.378061 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 05:38:25.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:25.386600 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 05:38:25.436047 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 23 05:38:25.473169 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 23 05:38:25.494800 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 05:38:25.509449 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 05:38:25.516367 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 23 05:38:25.526069 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 05:38:25.537397 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 05:38:25.555913 kernel: AES CTR mode by8 optimization enabled
Jan 23 05:38:25.560117 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 05:38:25.563058 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 05:38:25.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:25.573075 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 05:38:25.584737 disk-uuid[768]: Primary Header is updated.
Jan 23 05:38:25.584737 disk-uuid[768]: Secondary Entries is updated.
Jan 23 05:38:25.584737 disk-uuid[768]: Secondary Header is updated.
Jan 23 05:38:25.576640 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 05:38:25.584457 systemd-networkd[723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 23 05:38:25.584464 systemd-networkd[723]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 05:38:25.586266 systemd-networkd[723]: eth0: Link UP
Jan 23 05:38:25.586620 systemd-networkd[723]: eth0: Gained carrier
Jan 23 05:38:25.586633 systemd-networkd[723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 23 05:38:25.650202 systemd-networkd[723]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 23 05:38:25.725959 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 05:38:25.853796 kernel: kauditd_printk_skb: 13 callbacks suppressed
Jan 23 05:38:25.853824 kernel: audit: type=1130 audit(1769146705.838:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:25.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:25.853974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 05:38:25.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:25.863509 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 05:38:25.875531 kernel: audit: type=1130 audit(1769146705.861:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:25.873097 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 05:38:25.880412 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 05:38:25.891948 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 05:38:25.935026 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 05:38:25.946490 kernel: audit: type=1130 audit(1769146705.936:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:25.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:26.663492 disk-uuid[788]: Warning: The kernel is still using the old partition table.
Jan 23 05:38:26.663492 disk-uuid[788]: The new table will be used at the next reboot or after you
Jan 23 05:38:26.663492 disk-uuid[788]: run partprobe(8) or kpartx(8)
Jan 23 05:38:26.663492 disk-uuid[788]: The operation has completed successfully.
Jan 23 05:38:26.680106 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 05:38:26.680404 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 05:38:26.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:26.692416 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 05:38:26.710440 kernel: audit: type=1130 audit(1769146706.689:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:26.710486 kernel: audit: type=1131 audit(1769146706.689:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:26.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:26.749845 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (859)
Jan 23 05:38:26.749959 kernel: BTRFS info (device vda6): first mount of filesystem b80171df-62db-4edd-bd35-c5ece67b9079
Jan 23 05:38:26.754498 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 05:38:26.762021 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 05:38:26.762074 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 05:38:26.773940 kernel: BTRFS info (device vda6): last unmount of filesystem b80171df-62db-4edd-bd35-c5ece67b9079
Jan 23 05:38:26.775363 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 05:38:26.787704 kernel: audit: type=1130 audit(1769146706.775:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:26.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:26.777708 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 05:38:26.926814 ignition[878]: Ignition 2.24.0
Jan 23 05:38:26.926846 ignition[878]: Stage: fetch-offline
Jan 23 05:38:26.926930 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Jan 23 05:38:26.926943 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 05:38:26.927026 ignition[878]: parsed url from cmdline: ""
Jan 23 05:38:26.927030 ignition[878]: no config URL provided
Jan 23 05:38:26.927035 ignition[878]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 05:38:26.927046 ignition[878]: no config at "/usr/lib/ignition/user.ign"
Jan 23 05:38:26.927084 ignition[878]: op(1): [started] loading QEMU firmware config module
Jan 23 05:38:26.927089 ignition[878]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 23 05:38:26.940299 ignition[878]: op(1): [finished] loading QEMU firmware config module
Jan 23 05:38:26.940364 ignition[878]: QEMU firmware config was not found. Ignoring...
Jan 23 05:38:27.120743 ignition[878]: parsing config with SHA512: 05ef53f69316c5925541880ceaeb6d2f442bd033396121a125621d5810638614ecb86b7eb361cc734afafea99ab941b8ed3db52c0c8837b384e9508e2cd40b50
Jan 23 05:38:27.128223 unknown[878]: fetched base config from "system"
Jan 23 05:38:27.128261 unknown[878]: fetched user config from "qemu"
Jan 23 05:38:27.128673 ignition[878]: fetch-offline: fetch-offline passed
Jan 23 05:38:27.147703 kernel: audit: type=1130 audit(1769146707.136:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:27.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:27.132694 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 05:38:27.128754 ignition[878]: Ignition finished successfully
Jan 23 05:38:27.138146 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 23 05:38:27.139633 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 05:38:27.191021 ignition[888]: Ignition 2.24.0
Jan 23 05:38:27.191046 ignition[888]: Stage: kargs
Jan 23 05:38:27.191183 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Jan 23 05:38:27.191192 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 05:38:27.211930 kernel: audit: type=1130 audit(1769146707.201:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:27.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:27.197923 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 05:38:27.192128 ignition[888]: kargs: kargs passed
Jan 23 05:38:27.192194 ignition[888]: Ignition finished successfully
Jan 23 05:38:27.217648 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 05:38:27.257283 ignition[895]: Ignition 2.24.0
Jan 23 05:38:27.257355 ignition[895]: Stage: disks
Jan 23 05:38:27.257588 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Jan 23 05:38:27.257603 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 05:38:27.258999 ignition[895]: disks: disks passed
Jan 23 05:38:27.259046 ignition[895]: Ignition finished successfully
Jan 23 05:38:27.271622 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 05:38:27.283691 kernel: audit: type=1130 audit(1769146707.273:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:27.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:27.275470 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 05:38:27.284641 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 05:38:27.293086 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 05:38:27.300558 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 05:38:27.305587 systemd[1]: Reached target basic.target - Basic System.
Jan 23 05:38:27.316795 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 05:38:27.364611 systemd-fsck[904]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 23 05:38:27.374174 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 05:38:27.389257 kernel: audit: type=1130 audit(1769146707.375:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:27.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:27.377231 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 05:38:27.401030 systemd-networkd[723]: eth0: Gained IPv6LL
Jan 23 05:38:27.531938 kernel: EXT4-fs (vda9): mounted filesystem 1d60f2e3-0272-4b1f-a3ba-f433a4e81bf1 r/w with ordered data mode. Quota mode: none.
Jan 23 05:38:27.532680 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 05:38:27.538085 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 05:38:27.545975 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 05:38:27.550652 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 05:38:27.557520 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 05:38:27.557596 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 05:38:27.557624 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 05:38:27.582713 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 05:38:27.600634 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (913)
Jan 23 05:38:27.600659 kernel: BTRFS info (device vda6): first mount of filesystem b80171df-62db-4edd-bd35-c5ece67b9079
Jan 23 05:38:27.600671 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 05:38:27.589052 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 05:38:27.608617 kernel: BTRFS info (device vda6): turning on async discard Jan 23 05:38:27.608650 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 05:38:27.610539 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 05:38:27.820618 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 05:38:27.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:27.824582 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 05:38:27.830809 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 05:38:27.867619 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 05:38:27.873123 kernel: BTRFS info (device vda6): last unmount of filesystem b80171df-62db-4edd-bd35-c5ece67b9079 Jan 23 05:38:27.901239 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 05:38:27.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:27.921830 ignition[1013]: INFO : Ignition 2.24.0 Jan 23 05:38:27.924678 ignition[1013]: INFO : Stage: mount Jan 23 05:38:27.924678 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 05:38:27.924678 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 05:38:27.933067 ignition[1013]: INFO : mount: mount passed Jan 23 05:38:27.933067 ignition[1013]: INFO : Ignition finished successfully Jan 23 05:38:27.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:27.934501 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jan 23 05:38:27.937211 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 05:38:27.977987 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 05:38:28.010960 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1023) Jan 23 05:38:28.016467 kernel: BTRFS info (device vda6): first mount of filesystem b80171df-62db-4edd-bd35-c5ece67b9079 Jan 23 05:38:28.016519 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 05:38:28.024245 kernel: BTRFS info (device vda6): turning on async discard Jan 23 05:38:28.024285 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 05:38:28.026841 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 05:38:28.073013 ignition[1040]: INFO : Ignition 2.24.0 Jan 23 05:38:28.073013 ignition[1040]: INFO : Stage: files Jan 23 05:38:28.078436 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 05:38:28.078436 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 05:38:28.078436 ignition[1040]: DEBUG : files: compiled without relabeling support, skipping Jan 23 05:38:28.078436 ignition[1040]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 05:38:28.078436 ignition[1040]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 05:38:28.099158 ignition[1040]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 05:38:28.099158 ignition[1040]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 05:38:28.099158 ignition[1040]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 05:38:28.099158 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 05:38:28.099158 ignition[1040]: INFO : files: createFilesystemsFiles: 
createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 23 05:38:28.083404 unknown[1040]: wrote ssh authorized keys file for user: core Jan 23 05:38:28.145086 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 05:38:28.294205 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 05:38:28.294205 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 05:38:28.307009 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 05:38:28.307009 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 05:38:28.307009 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 05:38:28.307009 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 05:38:28.307009 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 05:38:28.307009 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 05:38:28.307009 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 05:38:28.307009 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 05:38:28.307009 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 
05:38:28.307009 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 05:38:28.307009 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 05:38:28.307009 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 05:38:28.307009 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 23 05:38:28.605919 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 05:38:29.336623 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 05:38:29.336623 ignition[1040]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 05:38:29.350557 ignition[1040]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 05:38:29.359882 ignition[1040]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 05:38:29.359882 ignition[1040]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 05:38:29.370444 ignition[1040]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 05:38:29.370444 ignition[1040]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 05:38:29.370444 ignition[1040]: INFO : 
files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 05:38:29.370444 ignition[1040]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 05:38:29.370444 ignition[1040]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 23 05:38:29.411571 ignition[1040]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 05:38:29.422711 ignition[1040]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 05:38:29.428248 ignition[1040]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 23 05:38:29.428248 ignition[1040]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 23 05:38:29.428248 ignition[1040]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 05:38:29.428248 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 05:38:29.428248 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 05:38:29.428248 ignition[1040]: INFO : files: files passed Jan 23 05:38:29.428248 ignition[1040]: INFO : Ignition finished successfully Jan 23 05:38:29.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.439002 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 05:38:29.444089 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 05:38:29.477967 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 23 05:38:29.481803 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 05:38:29.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.481992 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 05:38:29.518413 initrd-setup-root-after-ignition[1071]: grep: /sysroot/oem/oem-release: No such file or directory Jan 23 05:38:29.527235 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 05:38:29.527235 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 05:38:29.535513 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 05:38:29.542508 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 05:38:29.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.546186 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 05:38:29.548908 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 05:38:29.631592 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 05:38:29.631807 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 23 05:38:29.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.639587 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 05:38:29.648056 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 05:38:29.653355 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 05:38:29.654918 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 05:38:29.691826 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 05:38:29.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.702791 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 05:38:29.746454 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 05:38:29.746778 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 05:38:29.748435 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 05:38:29.762689 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 05:38:29.764684 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 05:38:29.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:38:29.764931 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 05:38:29.776604 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 05:38:29.779722 systemd[1]: Stopped target basic.target - Basic System. Jan 23 05:38:29.788707 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 05:38:29.795071 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 05:38:29.811624 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 05:38:29.813966 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 05:38:29.821740 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 05:38:29.829070 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 05:38:29.834637 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 05:38:29.841828 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 05:38:29.848559 systemd[1]: Stopped target swap.target - Swaps. Jan 23 05:38:29.850529 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 05:38:29.850721 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 05:38:29.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.864031 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 05:38:29.866050 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 05:38:29.871633 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 05:38:29.871941 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 23 05:38:29.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.877526 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 05:38:29.877747 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 05:38:29.888980 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 05:38:29.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.889195 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 05:38:29.895529 systemd[1]: Stopped target paths.target - Path Units. Jan 23 05:38:29.897786 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 05:38:29.904699 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 05:38:29.908102 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 05:38:29.914819 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 05:38:29.925927 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 05:38:29.926085 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 05:38:29.931540 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 05:38:29.931678 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 05:38:29.933816 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 23 05:38:29.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:38:29.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.934028 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 23 05:38:29.940665 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 05:38:29.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.940849 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 05:38:29.950997 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 05:38:29.951125 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 05:38:29.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.957844 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 05:38:29.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.960937 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 05:38:29.961087 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 05:38:29.963695 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 05:38:29.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:38:29.973832 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 05:38:29.974091 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 05:38:30.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:29.982095 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 05:38:29.982262 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 05:38:29.988820 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 05:38:29.988994 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 05:38:30.004725 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 05:38:30.038088 ignition[1097]: INFO : Ignition 2.24.0 Jan 23 05:38:30.038088 ignition[1097]: INFO : Stage: umount Jan 23 05:38:30.038088 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 05:38:30.038088 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 05:38:30.038088 ignition[1097]: INFO : umount: umount passed Jan 23 05:38:30.038088 ignition[1097]: INFO : Ignition finished successfully Jan 23 05:38:30.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.008465 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 05:38:30.038418 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 23 05:38:30.043767 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 05:38:30.052641 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 05:38:30.053131 systemd[1]: Stopped target network.target - Network. Jan 23 05:38:30.082619 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 05:38:30.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.082775 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 05:38:30.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.084786 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 05:38:30.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.084932 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 05:38:30.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.091950 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 05:38:30.092028 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 05:38:30.095973 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 05:38:30.096038 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 05:38:30.105043 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 05:38:30.111824 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 23 05:38:30.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.132075 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 05:38:30.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.132281 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 05:38:30.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.148000 audit: BPF prog-id=6 op=UNLOAD Jan 23 05:38:30.141217 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 05:38:30.141408 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 05:38:30.142730 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 05:38:30.142795 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 05:38:30.155539 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 05:38:30.155740 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 05:38:30.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.174069 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 05:38:30.176586 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 05:38:30.176639 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 23 05:38:30.185000 audit: BPF prog-id=9 op=UNLOAD Jan 23 05:38:30.183649 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 05:38:30.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.189654 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 05:38:30.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.189741 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 05:38:30.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.198982 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 05:38:30.199088 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 05:38:30.201423 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 05:38:30.201483 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 05:38:30.201906 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 05:38:30.244815 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 05:38:30.245228 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 05:38:30.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.250945 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 23 05:38:30.251001 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 05:38:30.257773 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 05:38:30.257830 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 05:38:30.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.270482 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 05:38:30.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.270563 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 05:38:30.277538 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 05:38:30.277623 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 05:38:30.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.285999 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 05:38:30.286119 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 05:38:30.301009 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 05:38:30.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.308172 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Jan 23 05:38:30.308253 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 05:38:30.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.315511 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 05:38:30.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.315597 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 05:38:30.327765 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 05:38:30.327924 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 05:38:30.366833 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 05:38:30.376087 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 05:38:30.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:30.384761 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 05:38:30.388103 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 05:38:30.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 23 05:38:30.398563 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 05:38:30.407113 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 05:38:30.434704 systemd[1]: Switching root. Jan 23 05:38:30.485476 systemd-journald[316]: Journal stopped Jan 23 05:38:32.427846 systemd-journald[316]: Received SIGTERM from PID 1 (systemd). Jan 23 05:38:32.427981 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 05:38:32.427997 kernel: SELinux: policy capability open_perms=1 Jan 23 05:38:32.428009 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 05:38:32.428020 kernel: SELinux: policy capability always_check_network=0 Jan 23 05:38:32.428035 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 05:38:32.428046 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 05:38:32.428057 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 05:38:32.428070 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 05:38:32.428081 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 05:38:32.428093 systemd[1]: Successfully loaded SELinux policy in 89.081ms. Jan 23 05:38:32.428111 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.774ms. Jan 23 05:38:32.428131 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 05:38:32.428143 systemd[1]: Detected virtualization kvm. Jan 23 05:38:32.428162 systemd[1]: Detected architecture x86-64. Jan 23 05:38:32.428188 systemd[1]: Detected first boot. Jan 23 05:38:32.428202 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. 
Jan 23 05:38:32.428218 zram_generator::config[1141]: No configuration found.
Jan 23 05:38:32.428237 kernel: Guest personality initialized and is inactive
Jan 23 05:38:32.428248 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 05:38:32.428261 kernel: Initialized host personality
Jan 23 05:38:32.428275 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 05:38:32.428286 systemd[1]: Populated /etc with preset unit settings.
Jan 23 05:38:32.428297 kernel: kauditd_printk_skb: 52 callbacks suppressed
Jan 23 05:38:32.428309 kernel: audit: type=1334 audit(1769146711.866:86): prog-id=12 op=LOAD
Jan 23 05:38:32.428320 kernel: audit: type=1334 audit(1769146711.866:87): prog-id=3 op=UNLOAD
Jan 23 05:38:32.428372 kernel: audit: type=1334 audit(1769146711.866:88): prog-id=13 op=LOAD
Jan 23 05:38:32.428385 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 05:38:32.428399 kernel: audit: type=1334 audit(1769146711.866:89): prog-id=14 op=LOAD
Jan 23 05:38:32.428410 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 05:38:32.428422 kernel: audit: type=1334 audit(1769146711.866:90): prog-id=4 op=UNLOAD
Jan 23 05:38:32.428433 kernel: audit: type=1334 audit(1769146711.866:91): prog-id=5 op=UNLOAD
Jan 23 05:38:32.428445 kernel: audit: type=1131 audit(1769146711.869:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.428456 kernel: audit: type=1334 audit(1769146711.899:93): prog-id=12 op=UNLOAD
Jan 23 05:38:32.428468 kernel: audit: type=1130 audit(1769146711.902:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.428482 kernel: audit: type=1131 audit(1769146711.903:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.428493 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 05:38:32.428510 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 05:38:32.428522 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 05:38:32.428533 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 05:38:32.428545 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 05:38:32.428560 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 05:38:32.428572 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 05:38:32.428584 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 05:38:32.428595 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 05:38:32.428607 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 05:38:32.428619 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 05:38:32.428632 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 05:38:32.428644 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 05:38:32.428656 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 05:38:32.428677 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 05:38:32.428694 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 05:38:32.428707 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 05:38:32.428719 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 05:38:32.428733 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 05:38:32.428745 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 05:38:32.428756 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 05:38:32.428768 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 05:38:32.428780 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 05:38:32.428791 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 05:38:32.428803 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 23 05:38:32.428816 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 05:38:32.428829 systemd[1]: Reached target swap.target - Swaps.
Jan 23 05:38:32.428840 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 05:38:32.428893 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 05:38:32.428913 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 05:38:32.428935 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 23 05:38:32.428949 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 23 05:38:32.428961 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 05:38:32.428976 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 23 05:38:32.428988 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 23 05:38:32.429000 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 05:38:32.429011 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 05:38:32.429023 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 05:38:32.429034 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 05:38:32.429047 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 05:38:32.429071 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 05:38:32.429091 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 05:38:32.429103 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 05:38:32.429115 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 05:38:32.429133 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 05:38:32.429153 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 05:38:32.429168 systemd[1]: Reached target machines.target - Containers.
Jan 23 05:38:32.429181 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 05:38:32.429193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 05:38:32.429205 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 05:38:32.429217 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 05:38:32.429228 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 05:38:32.429240 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 05:38:32.429253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 05:38:32.429265 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 05:38:32.429276 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 05:38:32.429288 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 05:38:32.429299 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 05:38:32.429311 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 05:38:32.429362 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 05:38:32.429380 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 05:38:32.429393 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 05:38:32.429404 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 05:38:32.429416 kernel: ACPI: bus type drm_connector registered
Jan 23 05:38:32.429429 kernel: fuse: init (API version 7.41)
Jan 23 05:38:32.429440 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 05:38:32.429452 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 05:38:32.429464 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 05:38:32.429476 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 05:38:32.429488 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 05:38:32.429522 systemd-journald[1220]: Collecting audit messages is enabled.
Jan 23 05:38:32.429547 systemd-journald[1220]: Journal started
Jan 23 05:38:32.429567 systemd-journald[1220]: Runtime Journal (/run/log/journal/4061683f67df4ad8969fbeaf904ed548) is 6M, max 48.2M, 42.1M free.
Jan 23 05:38:32.438160 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 05:38:32.121000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 23 05:38:32.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.350000 audit: BPF prog-id=14 op=UNLOAD
Jan 23 05:38:32.350000 audit: BPF prog-id=13 op=UNLOAD
Jan 23 05:38:32.352000 audit: BPF prog-id=15 op=LOAD
Jan 23 05:38:32.352000 audit: BPF prog-id=16 op=LOAD
Jan 23 05:38:32.352000 audit: BPF prog-id=17 op=LOAD
Jan 23 05:38:32.424000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 23 05:38:32.424000 audit[1220]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd64545860 a2=4000 a3=0 items=0 ppid=1 pid=1220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:38:32.424000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 23 05:38:31.852730 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 05:38:31.868764 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 23 05:38:31.869724 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 05:38:31.870284 systemd[1]: systemd-journald.service: Consumed 1.039s CPU time.
Jan 23 05:38:32.442918 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 05:38:32.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.449632 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 05:38:32.453128 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 05:38:32.456306 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 05:38:32.459219 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 05:38:32.462629 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 05:38:32.466641 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 05:38:32.469722 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 05:38:32.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.474116 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 05:38:32.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.479166 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 05:38:32.479545 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 05:38:32.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.484455 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 05:38:32.484813 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 05:38:32.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.489255 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 05:38:32.489641 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 05:38:32.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.493816 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 05:38:32.494423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 05:38:32.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.499247 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 05:38:32.499640 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 05:38:32.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.504187 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 05:38:32.504590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 05:38:32.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.508796 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 05:38:32.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.514100 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 05:38:32.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.519114 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 05:38:32.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.523581 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 05:38:32.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.543030 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 05:38:32.548262 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 23 05:38:32.554252 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 05:38:32.558911 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 05:38:32.562391 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 05:38:32.562546 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 05:38:32.565356 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 05:38:32.570103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 05:38:32.570250 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 23 05:38:32.579423 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 05:38:32.585043 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 05:38:32.589138 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 05:38:32.591175 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 05:38:32.595457 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 05:38:32.602030 systemd-journald[1220]: Time spent on flushing to /var/log/journal/4061683f67df4ad8969fbeaf904ed548 is 32.713ms for 1099 entries.
Jan 23 05:38:32.602030 systemd-journald[1220]: System Journal (/var/log/journal/4061683f67df4ad8969fbeaf904ed548) is 8M, max 163.5M, 155.5M free.
Jan 23 05:38:32.658517 systemd-journald[1220]: Received client request to flush runtime journal.
Jan 23 05:38:32.658590 kernel: loop1: detected capacity change from 0 to 50784
Jan 23 05:38:32.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.599089 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 05:38:32.608266 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 05:38:32.613216 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 05:38:32.622448 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 05:38:32.627444 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 05:38:32.632436 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 05:38:32.637221 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 05:38:32.644553 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 05:38:32.653651 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 05:38:32.674167 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 05:38:32.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.680245 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 05:38:32.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.702549 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 05:38:32.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.707000 audit: BPF prog-id=18 op=LOAD
Jan 23 05:38:32.707000 audit: BPF prog-id=19 op=LOAD
Jan 23 05:38:32.707000 audit: BPF prog-id=20 op=LOAD
Jan 23 05:38:32.710807 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 23 05:38:32.712964 kernel: loop2: detected capacity change from 0 to 111560
Jan 23 05:38:32.717000 audit: BPF prog-id=21 op=LOAD
Jan 23 05:38:32.720036 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 05:38:32.725189 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 05:38:32.729078 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 05:38:32.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.738000 audit: BPF prog-id=22 op=LOAD
Jan 23 05:38:32.738000 audit: BPF prog-id=23 op=LOAD
Jan 23 05:38:32.738000 audit: BPF prog-id=24 op=LOAD
Jan 23 05:38:32.740170 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 23 05:38:32.743000 audit: BPF prog-id=25 op=LOAD
Jan 23 05:38:32.743000 audit: BPF prog-id=26 op=LOAD
Jan 23 05:38:32.743000 audit: BPF prog-id=27 op=LOAD
Jan 23 05:38:32.747968 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 05:38:32.757238 kernel: loop3: detected capacity change from 0 to 224512
Jan 23 05:38:32.762432 systemd-tmpfiles[1279]: ACLs are not supported, ignoring.
Jan 23 05:38:32.762914 systemd-tmpfiles[1279]: ACLs are not supported, ignoring.
Jan 23 05:38:32.771138 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 05:38:32.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.799930 kernel: loop4: detected capacity change from 0 to 50784
Jan 23 05:38:32.801175 systemd-nsresourced[1282]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 23 05:38:32.804627 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 23 05:38:32.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.818028 kernel: loop5: detected capacity change from 0 to 111560
Jan 23 05:38:32.820319 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 05:38:32.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:32.845256 kernel: loop6: detected capacity change from 0 to 224512
Jan 23 05:38:32.858556 (sd-merge)[1289]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 23 05:38:32.864060 (sd-merge)[1289]: Merged extensions into '/usr'.
Jan 23 05:38:32.871472 systemd[1]: Reload requested from client PID 1261 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 05:38:32.871653 systemd[1]: Reloading...
Jan 23 05:38:32.929633 systemd-oomd[1277]: No swap; memory pressure usage will be degraded
Jan 23 05:38:32.949447 systemd-resolved[1278]: Positive Trust Anchors:
Jan 23 05:38:32.949476 systemd-resolved[1278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 05:38:32.949483 systemd-resolved[1278]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 23 05:38:32.949530 systemd-resolved[1278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 05:38:32.957752 systemd-resolved[1278]: Defaulting to hostname 'linux'.
Jan 23 05:38:32.973513 zram_generator::config[1329]: No configuration found.
Jan 23 05:38:33.182433 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 05:38:33.183145 systemd[1]: Reloading finished in 310 ms.
Jan 23 05:38:33.226215 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 23 05:38:33.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:33.231251 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 05:38:33.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:33.236078 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 05:38:33.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:33.245300 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 05:38:33.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:33.255665 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 05:38:33.279577 systemd[1]: Starting ensure-sysext.service...
Jan 23 05:38:33.283539 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 05:38:33.286000 audit: BPF prog-id=8 op=UNLOAD
Jan 23 05:38:33.286000 audit: BPF prog-id=7 op=UNLOAD
Jan 23 05:38:33.287000 audit: BPF prog-id=28 op=LOAD
Jan 23 05:38:33.287000 audit: BPF prog-id=29 op=LOAD
Jan 23 05:38:33.289710 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 05:38:33.295000 audit: BPF prog-id=30 op=LOAD
Jan 23 05:38:33.295000 audit: BPF prog-id=18 op=UNLOAD
Jan 23 05:38:33.295000 audit: BPF prog-id=31 op=LOAD
Jan 23 05:38:33.295000 audit: BPF prog-id=32 op=LOAD
Jan 23 05:38:33.295000 audit: BPF prog-id=19 op=UNLOAD
Jan 23 05:38:33.295000 audit: BPF prog-id=20 op=UNLOAD
Jan 23 05:38:33.296000 audit: BPF prog-id=33 op=LOAD
Jan 23 05:38:33.296000 audit: BPF prog-id=21 op=UNLOAD
Jan 23 05:38:33.297000 audit: BPF prog-id=34 op=LOAD
Jan 23 05:38:33.297000 audit: BPF prog-id=25 op=UNLOAD
Jan 23 05:38:33.297000 audit: BPF prog-id=35 op=LOAD
Jan 23 05:38:33.297000 audit: BPF prog-id=36 op=LOAD
Jan 23 05:38:33.297000 audit: BPF prog-id=26 op=UNLOAD
Jan 23 05:38:33.298000 audit: BPF prog-id=27 op=UNLOAD
Jan 23 05:38:33.301000 audit: BPF prog-id=37 op=LOAD
Jan 23 05:38:33.301000 audit: BPF prog-id=22 op=UNLOAD
Jan 23 05:38:33.301000 audit: BPF prog-id=38 op=LOAD
Jan 23 05:38:33.301000 audit: BPF prog-id=39 op=LOAD
Jan 23 05:38:33.301000 audit: BPF prog-id=23 op=UNLOAD
Jan 23 05:38:33.301000 audit: BPF prog-id=24 op=UNLOAD
Jan 23 05:38:33.302000 audit: BPF prog-id=40 op=LOAD
Jan 23 05:38:33.302000 audit: BPF prog-id=15 op=UNLOAD
Jan 23 05:38:33.303000 audit: BPF prog-id=41 op=LOAD
Jan 23 05:38:33.303000 audit: BPF prog-id=42 op=LOAD
Jan 23 05:38:33.303000 audit: BPF prog-id=16 op=UNLOAD
Jan 23 05:38:33.303000 audit: BPF prog-id=17 op=UNLOAD
Jan 23 05:38:33.311458 systemd[1]: Reload requested from client PID 1371 ('systemctl') (unit ensure-sysext.service)...
Jan 23 05:38:33.311507 systemd[1]: Reloading...
Jan 23 05:38:33.315095 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 05:38:33.315168 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 05:38:33.315655 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 05:38:33.317848 systemd-tmpfiles[1372]: ACLs are not supported, ignoring. Jan 23 05:38:33.318047 systemd-tmpfiles[1372]: ACLs are not supported, ignoring. Jan 23 05:38:33.327171 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 05:38:33.327198 systemd-tmpfiles[1372]: Skipping /boot Jan 23 05:38:33.329789 systemd-udevd[1373]: Using default interface naming scheme 'v257'. Jan 23 05:38:33.346537 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 05:38:33.346702 systemd-tmpfiles[1372]: Skipping /boot Jan 23 05:38:33.388930 zram_generator::config[1412]: No configuration found. Jan 23 05:38:33.490914 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 05:38:33.507938 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 05:38:33.514934 kernel: ACPI: button: Power Button [PWRF] Jan 23 05:38:33.529946 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 05:38:33.534491 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 05:38:33.671580 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 05:38:33.677212 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 05:38:33.677755 systemd[1]: Reloading finished in 365 ms. Jan 23 05:38:33.703550 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 23 05:38:33.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:33.711366 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 05:38:33.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:33.719000 audit: BPF prog-id=43 op=LOAD Jan 23 05:38:33.719000 audit: BPF prog-id=30 op=UNLOAD Jan 23 05:38:33.719000 audit: BPF prog-id=44 op=LOAD Jan 23 05:38:33.719000 audit: BPF prog-id=45 op=LOAD Jan 23 05:38:33.719000 audit: BPF prog-id=31 op=UNLOAD Jan 23 05:38:33.719000 audit: BPF prog-id=32 op=UNLOAD Jan 23 05:38:33.721000 audit: BPF prog-id=46 op=LOAD Jan 23 05:38:33.721000 audit: BPF prog-id=37 op=UNLOAD Jan 23 05:38:33.721000 audit: BPF prog-id=47 op=LOAD Jan 23 05:38:33.721000 audit: BPF prog-id=48 op=LOAD Jan 23 05:38:33.721000 audit: BPF prog-id=38 op=UNLOAD Jan 23 05:38:33.722000 audit: BPF prog-id=39 op=UNLOAD Jan 23 05:38:33.723000 audit: BPF prog-id=49 op=LOAD Jan 23 05:38:33.723000 audit: BPF prog-id=40 op=UNLOAD Jan 23 05:38:33.723000 audit: BPF prog-id=50 op=LOAD Jan 23 05:38:33.723000 audit: BPF prog-id=51 op=LOAD Jan 23 05:38:33.723000 audit: BPF prog-id=41 op=UNLOAD Jan 23 05:38:33.723000 audit: BPF prog-id=42 op=UNLOAD Jan 23 05:38:33.723000 audit: BPF prog-id=52 op=LOAD Jan 23 05:38:33.724000 audit: BPF prog-id=53 op=LOAD Jan 23 05:38:33.724000 audit: BPF prog-id=28 op=UNLOAD Jan 23 05:38:33.724000 audit: BPF prog-id=29 op=UNLOAD Jan 23 05:38:33.726000 audit: BPF prog-id=54 op=LOAD Jan 23 05:38:33.784000 audit: BPF prog-id=33 op=UNLOAD Jan 23 05:38:33.785000 audit: BPF prog-id=55 op=LOAD Jan 23 05:38:33.785000 audit: BPF prog-id=34 
op=UNLOAD Jan 23 05:38:33.785000 audit: BPF prog-id=56 op=LOAD Jan 23 05:38:33.785000 audit: BPF prog-id=57 op=LOAD Jan 23 05:38:33.785000 audit: BPF prog-id=35 op=UNLOAD Jan 23 05:38:33.785000 audit: BPF prog-id=36 op=UNLOAD Jan 23 05:38:33.809839 kernel: kvm_amd: TSC scaling supported Jan 23 05:38:33.810010 kernel: kvm_amd: Nested Virtualization enabled Jan 23 05:38:33.810050 kernel: kvm_amd: Nested Paging enabled Jan 23 05:38:33.814725 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 23 05:38:33.814763 kernel: kvm_amd: PMU virtualization is disabled Jan 23 05:38:33.862208 systemd[1]: Finished ensure-sysext.service. Jan 23 05:38:33.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:33.879945 kernel: EDAC MC: Ver: 3.0.0 Jan 23 05:38:33.890052 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 05:38:33.891830 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 05:38:33.896316 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 05:38:33.900110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 05:38:33.916492 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 05:38:33.922576 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 05:38:33.929031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 05:38:33.935245 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 05:38:33.939994 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 23 05:38:33.940120 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 05:38:33.941663 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 05:38:33.947624 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 05:38:33.952029 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 05:38:33.961382 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 05:38:33.966000 audit: BPF prog-id=58 op=LOAD Jan 23 05:38:33.969318 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 05:38:33.971000 audit: BPF prog-id=59 op=LOAD Jan 23 05:38:33.974614 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 05:38:33.982421 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 05:38:33.993030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 05:38:33.996586 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 05:38:33.999261 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 05:38:33.999725 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 05:38:34.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:38:34.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:34.005487 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 05:38:34.005823 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 05:38:34.010000 audit[1512]: SYSTEM_BOOT pid=1512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 23 05:38:34.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:34.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:34.015599 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 05:38:34.016007 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 05:38:34.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:34.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:34.020602 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 05:38:34.022148 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 23 05:38:34.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:34.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:34.028415 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 05:38:34.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:34.045636 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 05:38:34.045826 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 05:38:34.052787 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 05:38:34.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:38:34.068000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 23 05:38:34.068000 audit[1531]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff4542bcd0 a2=420 a3=0 items=0 ppid=1487 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:34.068000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 05:38:34.070273 augenrules[1531]: No rules Jan 23 05:38:34.070804 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 05:38:34.072316 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 05:38:34.076724 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 05:38:34.088118 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 05:38:34.091783 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 05:38:34.159099 systemd-networkd[1506]: lo: Link UP Jan 23 05:38:34.159132 systemd-networkd[1506]: lo: Gained carrier Jan 23 05:38:34.159650 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 05:38:34.165291 systemd-networkd[1506]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 05:38:34.165301 systemd-networkd[1506]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 23 05:38:34.167526 systemd-networkd[1506]: eth0: Link UP Jan 23 05:38:34.168172 systemd-networkd[1506]: eth0: Gained carrier Jan 23 05:38:34.168188 systemd-networkd[1506]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 05:38:34.186955 systemd-networkd[1506]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 05:38:34.187798 systemd-timesyncd[1508]: Network configuration changed, trying to establish connection. Jan 23 05:38:35.289798 systemd-timesyncd[1508]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 23 05:38:35.289869 systemd-timesyncd[1508]: Initial clock synchronization to Fri 2026-01-23 05:38:35.289682 UTC. Jan 23 05:38:35.290035 systemd-resolved[1278]: Clock change detected. Flushing caches. Jan 23 05:38:35.420697 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 05:38:35.487765 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 05:38:35.495447 systemd[1]: Reached target network.target - Network. Jan 23 05:38:35.498793 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 05:38:35.505595 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 05:38:35.512399 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 05:38:35.549678 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 05:38:36.209433 ldconfig[1499]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 05:38:36.220741 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 05:38:36.228416 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 23 05:38:36.429137 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 05:38:36.433951 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 05:38:36.437892 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 05:38:36.442366 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 05:38:36.446636 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 05:38:36.450772 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 05:38:36.454486 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 05:38:36.458695 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 23 05:38:36.462419 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 23 05:38:36.466335 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 05:38:36.476733 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 05:38:36.476897 systemd[1]: Reached target paths.target - Path Units. Jan 23 05:38:36.480778 systemd[1]: Reached target timers.target - Timer Units. Jan 23 05:38:36.484616 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 05:38:36.490920 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 05:38:36.497653 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 05:38:36.502481 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 05:38:36.506919 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Jan 23 05:38:36.513257 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 05:38:36.517647 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 05:38:36.523222 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 05:38:36.530144 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 05:38:36.534220 systemd[1]: Reached target basic.target - Basic System. Jan 23 05:38:36.538277 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 05:38:36.538356 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 05:38:36.543890 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 05:38:36.551359 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 05:38:36.555190 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 05:38:36.573754 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 05:38:36.579408 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 05:38:36.582984 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 05:38:36.584965 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 05:38:36.588338 jq[1555]: false Jan 23 05:38:36.607419 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 05:38:36.617246 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 05:38:36.625300 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 23 05:38:36.626278 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Refreshing passwd entry cache Jan 23 05:38:36.626284 oslogin_cache_refresh[1557]: Refreshing passwd entry cache Jan 23 05:38:36.628975 extend-filesystems[1556]: Found /dev/vda6 Jan 23 05:38:36.633028 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 05:38:36.637989 extend-filesystems[1556]: Found /dev/vda9 Jan 23 05:38:36.646221 extend-filesystems[1556]: Checking size of /dev/vda9 Jan 23 05:38:36.650474 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 05:38:36.655366 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 05:38:36.655587 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Failure getting users, quitting Jan 23 05:38:36.655587 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 05:38:36.655498 oslogin_cache_refresh[1557]: Failure getting users, quitting Jan 23 05:38:36.655573 oslogin_cache_refresh[1557]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 05:38:36.655749 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Refreshing group entry cache Jan 23 05:38:36.655673 oslogin_cache_refresh[1557]: Refreshing group entry cache Jan 23 05:38:36.656851 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 05:38:36.661025 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 23 05:38:36.668675 extend-filesystems[1556]: Resized partition /dev/vda9 Jan 23 05:38:36.687629 extend-filesystems[1577]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 05:38:36.696860 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 23 05:38:36.680005 oslogin_cache_refresh[1557]: Failure getting groups, quitting Jan 23 05:38:36.686442 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 05:38:36.698487 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Failure getting groups, quitting Jan 23 05:38:36.698487 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 05:38:36.680029 oslogin_cache_refresh[1557]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 05:38:36.703293 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 05:38:36.712657 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 05:38:36.714863 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 05:38:36.717671 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 05:38:36.718111 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 05:38:36.802144 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 23 05:38:36.871822 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 05:38:36.875788 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 23 05:38:36.884209 systemd-networkd[1506]: eth0: Gained IPv6LL Jan 23 05:38:36.888920 extend-filesystems[1577]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 05:38:36.888920 extend-filesystems[1577]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 23 05:38:36.888920 extend-filesystems[1577]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 23 05:38:36.916420 extend-filesystems[1556]: Resized filesystem in /dev/vda9 Jan 23 05:38:36.889947 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 05:38:36.892514 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 05:38:36.939521 jq[1579]: true Jan 23 05:38:36.893007 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 05:38:36.903885 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 05:38:36.904389 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 05:38:36.977411 tar[1591]: linux-amd64/LICENSE Jan 23 05:38:36.978305 jq[1596]: true Jan 23 05:38:36.979207 tar[1591]: linux-amd64/helm Jan 23 05:38:37.002228 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 05:38:37.009899 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 05:38:37.010578 systemd-logind[1569]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 05:38:37.010638 systemd-logind[1569]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 05:38:37.024742 update_engine[1572]: I20260123 05:38:37.024605 1572 main.cc:92] Flatcar Update Engine starting Jan 23 05:38:37.025256 dbus-daemon[1553]: [system] SELinux support is enabled Jan 23 05:38:37.031664 update_engine[1572]: I20260123 05:38:37.030746 1572 update_check_scheduler.cc:74] Next update check in 9m26s Jan 23 05:38:37.032017 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Jan 23 05:38:37.035327 systemd-logind[1569]: New seat seat0. Jan 23 05:38:37.040813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 05:38:37.107852 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 05:38:37.126168 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 05:38:37.139191 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 05:38:37.139226 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 05:38:37.145326 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 05:38:37.145361 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 05:38:37.150395 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 05:38:37.161418 systemd[1]: Started update-engine.service - Update Engine. Jan 23 05:38:37.163408 dbus-daemon[1553]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 05:38:37.182294 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 05:38:37.332468 bash[1630]: Updated "/home/core/.ssh/authorized_keys" Jan 23 05:38:37.338824 sshd_keygen[1587]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 05:38:37.355207 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 05:38:37.363587 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 05:38:37.373916 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 05:38:37.486445 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jan 23 05:38:37.499251 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 05:38:37.505241 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:40564.service - OpenSSH per-connection server daemon (10.0.0.1:40564). Jan 23 05:38:37.515252 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 23 05:38:37.515920 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 23 05:38:37.524780 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 05:38:37.651645 locksmithd[1629]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 05:38:37.667015 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 05:38:37.667846 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 05:38:37.692352 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 05:38:37.846130 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 05:38:37.861682 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 05:38:37.872430 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 05:38:37.880419 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 05:38:38.202456 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 40564 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:38:38.204506 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:38:38.224921 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 05:38:38.304144 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 05:38:38.324520 systemd-logind[1569]: New session 1 of user core. Jan 23 05:38:38.499155 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 23 05:38:38.511176 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 23 05:38:38.655213 containerd[1597]: time="2026-01-23T05:38:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 23 05:38:38.655213 containerd[1597]: time="2026-01-23T05:38:38.652883087Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 23 05:38:38.839784 (systemd)[1679]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 05:38:38.840187 containerd[1597]: time="2026-01-23T05:38:38.839033724Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="141.405µs"
Jan 23 05:38:38.840187 containerd[1597]: time="2026-01-23T05:38:38.839816656Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 23 05:38:38.840187 containerd[1597]: time="2026-01-23T05:38:38.839924077Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 23 05:38:38.840187 containerd[1597]: time="2026-01-23T05:38:38.840021469Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 23 05:38:38.841287 containerd[1597]: time="2026-01-23T05:38:38.840604898Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 23 05:38:38.841287 containerd[1597]: time="2026-01-23T05:38:38.840644131Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 05:38:38.841287 containerd[1597]: time="2026-01-23T05:38:38.840848133Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 05:38:38.841287 containerd[1597]: time="2026-01-23T05:38:38.840875964Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 05:38:38.843735 containerd[1597]: time="2026-01-23T05:38:38.843685133Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 05:38:38.843735 containerd[1597]: time="2026-01-23T05:38:38.843727351Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 05:38:38.843813 containerd[1597]: time="2026-01-23T05:38:38.843744954Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 05:38:38.843813 containerd[1597]: time="2026-01-23T05:38:38.843754142Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 23 05:38:38.844408 containerd[1597]: time="2026-01-23T05:38:38.844032311Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 23 05:38:38.844408 containerd[1597]: time="2026-01-23T05:38:38.844101520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 23 05:38:38.844408 containerd[1597]: time="2026-01-23T05:38:38.844231623Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 23 05:38:38.844712 containerd[1597]: time="2026-01-23T05:38:38.844613396Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 05:38:38.844712 containerd[1597]: time="2026-01-23T05:38:38.844695720Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 05:38:38.844712 containerd[1597]: time="2026-01-23T05:38:38.844710187Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 23 05:38:38.844796 containerd[1597]: time="2026-01-23T05:38:38.844775028Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 23 05:38:38.845587 containerd[1597]: time="2026-01-23T05:38:38.845423264Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 23 05:38:38.845587 containerd[1597]: time="2026-01-23T05:38:38.845531286Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 05:38:38.846966 systemd-logind[1569]: New session 2 of user core.
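The btrfs snapshotter is skipped above because /var/lib/containerd sits on ext4. The check containerd performs amounts to looking up the filesystem type backing a path; a rough Python sketch of that kind of lookup, working from /proc/self/mounts-style text with a longest-prefix match (simplified, and the sample mount table below is illustrative, not taken from this log):

```python
def fs_type(mounts_text: str, path: str) -> "str | None":
    """Return the filesystem type of the longest mount-point prefix of path."""
    best, best_type = "", None
    for line in mounts_text.splitlines():
        parts = line.split()
        if len(parts) < 3:
            continue
        mountpoint, fstype = parts[1], parts[2]
        # A mount point covers path if it equals it or is a directory prefix.
        if (path == mountpoint or path.startswith(mountpoint.rstrip("/") + "/")) \
                and len(mountpoint) > len(best):
            best, best_type = mountpoint, fstype
    return best_type

# Illustrative mount table; on a live system read /proc/self/mounts instead.
mounts = "/dev/sda9 / ext4 rw 0 0\n/dev/sda3 /usr ext2 ro 0 0\n"
print(fs_type(mounts, "/var/lib/containerd"))  # ext4
```

With a real mount table this reports ext4 for /var/lib/containerd, which is exactly why the btrfs snapshotter declines to load.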
Jan 23 05:38:38.856873 containerd[1597]: time="2026-01-23T05:38:38.856722684Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 23 05:38:38.856975 containerd[1597]: time="2026-01-23T05:38:38.856901908Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 23 05:38:38.859137 containerd[1597]: time="2026-01-23T05:38:38.858149989Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 23 05:38:38.859137 containerd[1597]: time="2026-01-23T05:38:38.858300930Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 23 05:38:38.859137 containerd[1597]: time="2026-01-23T05:38:38.858330917Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 23 05:38:38.859137 containerd[1597]: time="2026-01-23T05:38:38.858372815Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 23 05:38:38.859137 containerd[1597]: time="2026-01-23T05:38:38.858393934Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 23 05:38:38.859137 containerd[1597]: time="2026-01-23T05:38:38.858447484Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 23 05:38:38.859137 containerd[1597]: time="2026-01-23T05:38:38.858620928Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 23 05:38:38.859137 containerd[1597]: time="2026-01-23T05:38:38.858646215Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 23 05:38:38.859137 containerd[1597]: time="2026-01-23T05:38:38.858666063Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 23 05:38:38.859137 containerd[1597]: time="2026-01-23T05:38:38.858687172Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 23 05:38:38.859137 containerd[1597]: time="2026-01-23T05:38:38.858702520Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 23 05:38:38.859137 containerd[1597]: time="2026-01-23T05:38:38.858721687Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 23 05:38:38.859137 containerd[1597]: time="2026-01-23T05:38:38.859004214Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 23 05:38:38.859573 containerd[1597]: time="2026-01-23T05:38:38.859258679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 23 05:38:38.859573 containerd[1597]: time="2026-01-23T05:38:38.859294205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 23 05:38:38.859573 containerd[1597]: time="2026-01-23T05:38:38.859313311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 23 05:38:38.859573 containerd[1597]: time="2026-01-23T05:38:38.859331976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 23 05:38:38.859573 containerd[1597]: time="2026-01-23T05:38:38.859368083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 23 05:38:38.859747 containerd[1597]: time="2026-01-23T05:38:38.859680066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 23 05:38:38.859747 containerd[1597]: time="2026-01-23T05:38:38.859706956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 23 05:38:38.861168 containerd[1597]: time="2026-01-23T05:38:38.859856285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 23 05:38:38.861168 containerd[1597]: time="2026-01-23T05:38:38.859878447Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 23 05:38:38.861168 containerd[1597]: time="2026-01-23T05:38:38.859895528Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 23 05:38:38.861168 containerd[1597]: time="2026-01-23T05:38:38.860004843Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 23 05:38:38.861168 containerd[1597]: time="2026-01-23T05:38:38.860249309Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 23 05:38:38.861168 containerd[1597]: time="2026-01-23T05:38:38.860293943Z" level=info msg="Start snapshots syncer"
Jan 23 05:38:38.861168 containerd[1597]: time="2026-01-23T05:38:38.860385262Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 23 05:38:38.864198 containerd[1597]: time="2026-01-23T05:38:38.863020543Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 23 05:38:38.864198 containerd[1597]: time="2026-01-23T05:38:38.863641062Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 23 05:38:38.864692 containerd[1597]: time="2026-01-23T05:38:38.864314520Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 23 05:38:38.866020 containerd[1597]: time="2026-01-23T05:38:38.865585243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 23 05:38:38.866020 containerd[1597]: time="2026-01-23T05:38:38.865631840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 23 05:38:38.866020 containerd[1597]: time="2026-01-23T05:38:38.865651627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 23 05:38:38.866020 containerd[1597]: time="2026-01-23T05:38:38.865666915Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 23 05:38:38.866020 containerd[1597]: time="2026-01-23T05:38:38.865725846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 23 05:38:38.866020 containerd[1597]: time="2026-01-23T05:38:38.865746845Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 23 05:38:38.866020 containerd[1597]: time="2026-01-23T05:38:38.865785727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 23 05:38:38.866020 containerd[1597]: time="2026-01-23T05:38:38.865858763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 23 05:38:38.866020 containerd[1597]: time="2026-01-23T05:38:38.865931449Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 23 05:38:38.866020 containerd[1597]: time="2026-01-23T05:38:38.865995229Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 05:38:38.866020 containerd[1597]: time="2026-01-23T05:38:38.866016879Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 05:38:38.866020 containerd[1597]: time="2026-01-23T05:38:38.866029653Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 05:38:38.866918 containerd[1597]: time="2026-01-23T05:38:38.866044611Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 05:38:38.866918 containerd[1597]: time="2026-01-23T05:38:38.866112578Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 23 05:38:38.866918 containerd[1597]: time="2026-01-23T05:38:38.866131453Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 23 05:38:38.866918 containerd[1597]: time="2026-01-23T05:38:38.866168001Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 23 05:38:38.866918 containerd[1597]: time="2026-01-23T05:38:38.866325755Z" level=info msg="runtime interface created"
Jan 23 05:38:38.866918 containerd[1597]: time="2026-01-23T05:38:38.866339571Z" level=info msg="created NRI interface"
Jan 23 05:38:38.866918 containerd[1597]: time="2026-01-23T05:38:38.866410544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 23 05:38:38.866918 containerd[1597]: time="2026-01-23T05:38:38.866435781Z" level=info msg="Connect containerd service"
Jan 23 05:38:38.866918 containerd[1597]: time="2026-01-23T05:38:38.866568679Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 23 05:38:38.885040 containerd[1597]: time="2026-01-23T05:38:38.884970326Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 05:38:39.161103 kernel: hrtimer: interrupt took 3131858 ns
Jan 23 05:38:39.358917 systemd[1679]: Queued start job for default target default.target.
Jan 23 05:38:39.441580 systemd[1679]: Created slice app.slice - User Application Slice.
Jan 23 05:38:39.441700 systemd[1679]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 23 05:38:39.441729 systemd[1679]: Reached target paths.target - Paths.
Jan 23 05:38:39.441902 systemd[1679]: Reached target timers.target - Timers.
Jan 23 05:38:39.457126 systemd[1679]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 23 05:38:39.495431 systemd[1679]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 23 05:38:39.524775 systemd[1679]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 23 05:38:39.525724 systemd[1679]: Reached target sockets.target - Sockets.
Jan 23 05:38:39.630914 tar[1591]: linux-amd64/README.md
Jan 23 05:38:39.641834 systemd[1679]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 23 05:38:39.642120 systemd[1679]: Reached target basic.target - Basic System.
Jan 23 05:38:39.642234 systemd[1679]: Reached target default.target - Main User Target.
Jan 23 05:38:39.642295 systemd[1679]: Startup finished in 775ms.
Jan 23 05:38:39.643497 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 23 05:38:39.656587 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 23 05:38:39.664204 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 23 05:38:39.706467 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:40576.service - OpenSSH per-connection server daemon (10.0.0.1:40576).
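The "failed to load cni during init" error above is expected on a fresh node: the CRI plugin looks for a network configuration in /etc/cni/net.d (per the confDir in the cri config logged earlier), and nothing has installed one yet; a CNI plugin deployed later normally writes it. For reference, a minimal conflist of the kind that would satisfy this check looks roughly like the following (name, subnet, and filename are illustrative, not taken from this log):

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.85.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Dropped into /etc/cni/net.d/ (e.g. as 10-example.conflist), the cni network conf syncer started later in this log would pick it up and the error would clear.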
Jan 23 05:38:39.722178 containerd[1597]: time="2026-01-23T05:38:39.721125183Z" level=info msg="Start subscribing containerd event"
Jan 23 05:38:39.722178 containerd[1597]: time="2026-01-23T05:38:39.721245439Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 23 05:38:39.722178 containerd[1597]: time="2026-01-23T05:38:39.721274792Z" level=info msg="Start recovering state"
Jan 23 05:38:39.722178 containerd[1597]: time="2026-01-23T05:38:39.721303347Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 23 05:38:39.722178 containerd[1597]: time="2026-01-23T05:38:39.721775527Z" level=info msg="Start event monitor"
Jan 23 05:38:39.722178 containerd[1597]: time="2026-01-23T05:38:39.721799992Z" level=info msg="Start cni network conf syncer for default"
Jan 23 05:38:39.722178 containerd[1597]: time="2026-01-23T05:38:39.721835149Z" level=info msg="Start streaming server"
Jan 23 05:38:39.722178 containerd[1597]: time="2026-01-23T05:38:39.721958459Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 23 05:38:39.722178 containerd[1597]: time="2026-01-23T05:38:39.721972294Z" level=info msg="runtime interface starting up..."
Jan 23 05:38:39.722178 containerd[1597]: time="2026-01-23T05:38:39.722003062Z" level=info msg="starting plugins..."
Jan 23 05:38:39.728467 containerd[1597]: time="2026-01-23T05:38:39.727930981Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 23 05:38:39.728628 systemd[1]: Started containerd.service - containerd container runtime.
Jan 23 05:38:39.734494 containerd[1597]: time="2026-01-23T05:38:39.734470214Z" level=info msg="containerd successfully booted in 1.084191s"
Jan 23 05:38:39.863221 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 40576 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE
Jan 23 05:38:39.865601 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 05:38:39.876524 systemd-logind[1569]: New session 3 of user core.
Jan 23 05:38:40.035915 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 05:38:40.078528 sshd[1716]: Connection closed by 10.0.0.1 port 40576
Jan 23 05:38:40.079241 sshd-session[1712]: pam_unix(sshd:session): session closed for user core
Jan 23 05:38:40.096727 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:40576.service: Deactivated successfully.
Jan 23 05:38:40.099285 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 05:38:40.100748 systemd-logind[1569]: Session 3 logged out. Waiting for processes to exit.
Jan 23 05:38:40.105742 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:40584.service - OpenSSH per-connection server daemon (10.0.0.1:40584).
Jan 23 05:38:40.111254 systemd-logind[1569]: Removed session 3.
Jan 23 05:38:40.319968 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 40584 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE
Jan 23 05:38:40.322937 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 05:38:40.335604 systemd-logind[1569]: New session 4 of user core.
Jan 23 05:38:40.350394 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 05:38:40.444708 sshd[1726]: Connection closed by 10.0.0.1 port 40584
Jan 23 05:38:40.451269 sshd-session[1722]: pam_unix(sshd:session): session closed for user core
Jan 23 05:38:40.486434 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:40584.service: Deactivated successfully.
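The containerd entries in this log are logfmt-style key=value pairs with optionally quoted values. A small sketch of pulling individual fields out of such a line (regex-based and good enough for lines like these, not a full logfmt parser):

```python
import re

# Matches key="quoted value" or key=bare-value pairs, as containerd emits them.
PAIR = re.compile(r'(\w+)=("(?:\\.|[^"\\])*"|\S+)')

def parse_kv(line: str) -> dict:
    """Parse logfmt-style key=value pairs from a containerd log line."""
    out = {}
    for key, val in PAIR.findall(line):
        if val.startswith('"') and val.endswith('"'):
            val = val[1:-1].replace('\\"', '"')  # strip quotes, unescape
        out[key] = val
    return out

line = ('time="2026-01-23T05:38:39.734470214Z" level=info '
        'msg="containerd successfully booted in 1.084191s"')
fields = parse_kv(line)
print(fields["level"])  # info
print(fields["msg"])    # containerd successfully booted in 1.084191s
```

Filtering a boot log for `level=error` entries with this kind of parser is a quick way to find lines like the CNI failure above.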
Jan 23 05:38:40.490290 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 05:38:40.494435 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit.
Jan 23 05:38:40.496206 systemd-logind[1569]: Removed session 4.
Jan 23 05:38:42.502125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 05:38:42.506915 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 23 05:38:42.511006 systemd[1]: Startup finished in 3.343s (kernel) + 6.866s (initrd) + 10.775s (userspace) = 20.984s.
Jan 23 05:38:42.535767 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 05:38:44.347670 kubelet[1736]: E0123 05:38:44.347448 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 05:38:44.351753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 05:38:44.351955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 05:38:44.352537 systemd[1]: kubelet.service: Consumed 5.644s CPU time, 266.3M memory peak.
Jan 23 05:38:50.470978 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:44232.service - OpenSSH per-connection server daemon (10.0.0.1:44232).
Jan 23 05:38:50.541016 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 44232 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE
Jan 23 05:38:50.543541 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 05:38:50.550536 systemd-logind[1569]: New session 5 of user core.
Jan 23 05:38:50.565381 systemd[1]: Started session-5.scope - Session 5 of User core.
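The kubelet failure above is the normal state of a node before `kubeadm init` or `kubeadm join` has run: those commands are what write /var/lib/kubelet/config.yaml, so the first kubelet start fails and the unit waits to be restarted. For reference, a file at that path is a KubeletConfiguration document along these lines (all values illustrative, not taken from this log):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # matches SystemdCgroup=true in the containerd runc options logged above
staticPodPath: /etc/kubernetes/manifests
clusterDNS:
  - 10.96.0.10                   # illustrative cluster DNS service address
clusterDomain: cluster.local
```

Once the file exists, restarting kubelet.service clears the "no such file or directory" error.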
Jan 23 05:38:50.584541 sshd[1753]: Connection closed by 10.0.0.1 port 44232
Jan 23 05:38:50.584854 sshd-session[1749]: pam_unix(sshd:session): session closed for user core
Jan 23 05:38:50.598523 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:44232.service: Deactivated successfully.
Jan 23 05:38:50.600537 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 05:38:50.601807 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit.
Jan 23 05:38:50.605473 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:44242.service - OpenSSH per-connection server daemon (10.0.0.1:44242).
Jan 23 05:38:50.606324 systemd-logind[1569]: Removed session 5.
Jan 23 05:38:50.674677 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 44242 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE
Jan 23 05:38:50.676403 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 05:38:50.682484 systemd-logind[1569]: New session 6 of user core.
Jan 23 05:38:50.697490 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 05:38:50.710222 sshd[1763]: Connection closed by 10.0.0.1 port 44242
Jan 23 05:38:50.710465 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Jan 23 05:38:50.724916 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:44242.service: Deactivated successfully.
Jan 23 05:38:50.726855 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 05:38:50.727832 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit.
Jan 23 05:38:50.730635 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:44250.service - OpenSSH per-connection server daemon (10.0.0.1:44250).
Jan 23 05:38:50.731322 systemd-logind[1569]: Removed session 6.
Jan 23 05:38:50.807424 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 44250 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE
Jan 23 05:38:50.809719 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 05:38:50.816328 systemd-logind[1569]: New session 7 of user core.
Jan 23 05:38:50.826341 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 05:38:50.847148 sshd[1773]: Connection closed by 10.0.0.1 port 44250
Jan 23 05:38:50.847493 sshd-session[1769]: pam_unix(sshd:session): session closed for user core
Jan 23 05:38:50.857758 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:44250.service: Deactivated successfully.
Jan 23 05:38:50.859548 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 05:38:50.860869 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit.
Jan 23 05:38:50.864701 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:44252.service - OpenSSH per-connection server daemon (10.0.0.1:44252).
Jan 23 05:38:50.865687 systemd-logind[1569]: Removed session 7.
Jan 23 05:38:50.933815 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 44252 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE
Jan 23 05:38:50.936382 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 05:38:50.943408 systemd-logind[1569]: New session 8 of user core.
Jan 23 05:38:50.953363 systemd[1]: Started session-8.scope - Session 8 of User core.
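The sshd traffic here follows a regular cycle: Accepted publickey, pam_unix session opened, Connection closed, pam_unix session closed. When auditing an excerpt like this, a quick sanity check is that opens and closes balance; a minimal sketch counting them via the pam_unix markers (assuming the default pam_unix phrasing seen in this log):

```python
def session_counts(lines):
    """Count sshd session opens/closes via pam_unix(sshd:session) markers."""
    opened = sum(1 for l in lines
                 if "sshd-session" in l and "session opened for user" in l)
    closed = sum(1 for l in lines
                 if "sshd-session" in l and "session closed for user" in l)
    return opened, closed

journal = [
    "sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
    "sshd-session[1749]: pam_unix(sshd:session): session closed for user core",
]
print(session_counts(journal))  # (1, 1)
```

An imbalance (more opens than closes) in a longer capture usually means sessions were still live when the log was cut, not a fault.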
Jan 23 05:38:50.983001 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 23 05:38:50.983429 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 05:38:51.007192 sudo[1784]: pam_unix(sudo:session): session closed for user root
Jan 23 05:38:51.009383 sshd[1783]: Connection closed by 10.0.0.1 port 44252
Jan 23 05:38:51.009993 sshd-session[1779]: pam_unix(sshd:session): session closed for user core
Jan 23 05:38:51.030329 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:44252.service: Deactivated successfully.
Jan 23 05:38:51.032628 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 05:38:51.033879 systemd-logind[1569]: Session 8 logged out. Waiting for processes to exit.
Jan 23 05:38:51.037383 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:44260.service - OpenSSH per-connection server daemon (10.0.0.1:44260).
Jan 23 05:38:51.038387 systemd-logind[1569]: Removed session 8.
Jan 23 05:38:51.123280 sshd[1791]: Accepted publickey for core from 10.0.0.1 port 44260 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE
Jan 23 05:38:51.125686 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 05:38:51.132357 systemd-logind[1569]: New session 9 of user core.
Jan 23 05:38:51.142339 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 05:38:51.161999 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 05:38:51.162456 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 05:38:51.166488 sudo[1797]: pam_unix(sudo:session): session closed for user root
Jan 23 05:38:51.175828 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 05:38:51.176240 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 05:38:51.185433 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 05:38:51.234000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 23 05:38:51.235596 augenrules[1821]: No rules
Jan 23 05:38:51.237743 kernel: kauditd_printk_skb: 132 callbacks suppressed
Jan 23 05:38:51.237811 kernel: audit: type=1305 audit(1769146731.234:224): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 23 05:38:51.237680 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 05:38:51.238167 systemd[1]: Finished audit-rules.service - Load Audit Rules.
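The sequence above deletes the shipped rule files under /etc/audit/rules.d/ and reloads, which is why augenrules reports "No rules" and the kernel logs a remove_rule CONFIG_CHANGE. For context, files in that directory hold auditctl directives, one per line; an illustrative (invented, not from this log) file looks like:

```
# /etc/audit/rules.d/10-example.rules -- illustrative only
-w /etc/passwd -p wa -k identity
-a always,exit -F arch=b64 -S execve -k exec
```

augenrules concatenates every *.rules file in the directory into /etc/audit/audit.rules and loads the result with `auditctl -R`, which is exactly the invocation visible in the audit records that follow.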
Jan 23 05:38:51.239606 sudo[1796]: pam_unix(sudo:session): session closed for user root
Jan 23 05:38:51.241637 sshd[1795]: Connection closed by 10.0.0.1 port 44260
Jan 23 05:38:51.242216 sshd-session[1791]: pam_unix(sshd:session): session closed for user core
Jan 23 05:38:51.244124 kernel: audit: type=1300 audit(1769146731.234:224): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffeccd0db0 a2=420 a3=0 items=0 ppid=1802 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:38:51.234000 audit[1821]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffeccd0db0 a2=420 a3=0 items=0 ppid=1802 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:38:51.234000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 23 05:38:51.258652 kernel: audit: type=1327 audit(1769146731.234:224): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 23 05:38:51.258699 kernel: audit: type=1130 audit(1769146731.238:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:51.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:51.267107 kernel: audit: type=1131 audit(1769146731.238:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:51.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:51.276504 kernel: audit: type=1106 audit(1769146731.238:227): pid=1796 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:51.238000 audit[1796]: USER_END pid=1796 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:51.238000 audit[1796]: CRED_DISP pid=1796 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:51.295188 kernel: audit: type=1104 audit(1769146731.238:228): pid=1796 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:51.295252 kernel: audit: type=1106 audit(1769146731.243:229): pid=1791 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:38:51.243000 audit[1791]: USER_END pid=1791 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:38:51.308146 kernel: audit: type=1104 audit(1769146731.243:230): pid=1791 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:38:51.243000 audit[1791]: CRED_DISP pid=1791 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:38:51.324760 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:44260.service: Deactivated successfully.
Jan 23 05:38:51.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.10:22-10.0.0.1:44260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:51.328020 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 05:38:51.329294 systemd-logind[1569]: Session 9 logged out. Waiting for processes to exit.
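The audit PROCTITLE records above carry the process command line hex-encoded, with NUL bytes separating argv entries. Decoding the payload logged for auditctl recovers the exact command the audit-rules service ran:

```python
def decode_proctitle(hexstr: str) -> list:
    """Decode an audit PROCTITLE hex payload into its argv strings."""
    # argv entries are NUL-separated in the raw proctitle bytes
    return bytes.fromhex(hexstr).decode().split("\x00")

# proctitle value from the audit record above
title = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
print(decode_proctitle(title))
# ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```

The same trick applied to the later PROCTITLE records in this log yields `sshd-session: core [priv]` and the iptables invocations made by dockerd.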
Jan 23 05:38:51.333741 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:44264.service - OpenSSH per-connection server daemon (10.0.0.1:44264).
Jan 23 05:38:51.334549 systemd-logind[1569]: Removed session 9.
Jan 23 05:38:51.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.10:22-10.0.0.1:44264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:51.336119 kernel: audit: type=1131 audit(1769146731.323:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.10:22-10.0.0.1:44260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:51.380000 audit[1830]: USER_ACCT pid=1830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:38:51.381946 sshd[1830]: Accepted publickey for core from 10.0.0.1 port 44264 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE
Jan 23 05:38:51.381000 audit[1830]: CRED_ACQ pid=1830 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:38:51.381000 audit[1830]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7be98e80 a2=3 a3=0 items=0 ppid=1 pid=1830 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:38:51.381000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 05:38:51.383749 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 05:38:51.390333 systemd-logind[1569]: New session 10 of user core.
Jan 23 05:38:51.406460 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 23 05:38:51.408000 audit[1830]: USER_START pid=1830 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:38:51.410000 audit[1834]: CRED_ACQ pid=1834 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:38:51.421000 audit[1835]: USER_ACCT pid=1835 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:51.422671 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 05:38:51.421000 audit[1835]: CRED_REFR pid=1835 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:51.423217 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 05:38:51.422000 audit[1835]: USER_START pid=1835 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 23 05:38:51.800813 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 05:38:51.820530 (dockerd)[1857]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 05:38:52.088501 dockerd[1857]: time="2026-01-23T05:38:52.088338077Z" level=info msg="Starting up" Jan 23 05:38:52.089814 dockerd[1857]: time="2026-01-23T05:38:52.089781892Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 05:38:52.107124 dockerd[1857]: time="2026-01-23T05:38:52.107037662Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 05:38:52.161176 dockerd[1857]: time="2026-01-23T05:38:52.161038783Z" level=info msg="Loading containers: start." Jan 23 05:38:52.174114 kernel: Initializing XFRM netlink socket Jan 23 05:38:52.243000 audit[1910]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.243000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffef1aa4bf0 a2=0 a3=0 items=0 ppid=1857 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 05:38:52.247000 audit[1912]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.247000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe71970bd0 a2=0 a3=0 items=0 ppid=1857 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
05:38:52.247000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 05:38:52.250000 audit[1914]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.250000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff49f94070 a2=0 a3=0 items=0 ppid=1857 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.250000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 05:38:52.254000 audit[1916]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.254000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0bde16f0 a2=0 a3=0 items=0 ppid=1857 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.254000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 05:38:52.257000 audit[1918]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.257000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe7f29f7f0 a2=0 a3=0 items=0 ppid=1857 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.257000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 05:38:52.260000 audit[1920]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.260000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffda9684ec0 a2=0 a3=0 items=0 ppid=1857 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.260000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 05:38:52.263000 audit[1922]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.263000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc3835c810 a2=0 a3=0 items=0 ppid=1857 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.263000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 05:38:52.266000 audit[1924]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.266000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd3ada8a10 a2=0 a3=0 items=0 ppid=1857 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.266000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 05:38:52.304000 audit[1927]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.304000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff114848d0 a2=0 a3=0 items=0 ppid=1857 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.304000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 23 05:38:52.307000 audit[1929]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.307000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff3158c8f0 a2=0 a3=0 items=0 ppid=1857 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.307000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 05:38:52.311000 audit[1931]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.311000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffdcf216830 a2=0 a3=0 items=0 ppid=1857 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.311000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 05:38:52.314000 audit[1933]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.314000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffef32ef480 a2=0 a3=0 items=0 ppid=1857 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.314000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 05:38:52.318000 audit[1935]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.318000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc8a465670 a2=0 a3=0 items=0 ppid=1857 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.318000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 05:38:52.374000 audit[1965]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.374000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcbf412e20 a2=0 a3=0 items=0 ppid=1857 pid=1965 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.374000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 05:38:52.378000 audit[1967]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.378000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff42ac5a30 a2=0 a3=0 items=0 ppid=1857 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.378000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 05:38:52.381000 audit[1969]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.381000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec7e31810 a2=0 a3=0 items=0 ppid=1857 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.381000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 05:38:52.385000 audit[1971]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.385000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7c413b10 a2=0 a3=0 items=0 ppid=1857 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.385000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 05:38:52.388000 audit[1973]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.388000 audit[1973]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc793c3ae0 a2=0 a3=0 items=0 ppid=1857 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.388000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 05:38:52.392000 audit[1975]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.392000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe6875fbc0 a2=0 a3=0 items=0 ppid=1857 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.392000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 05:38:52.395000 audit[1977]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.395000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdc6277ec0 a2=0 a3=0 items=0 ppid=1857 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.395000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 05:38:52.400000 audit[1979]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.400000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffa4650fd0 a2=0 a3=0 items=0 ppid=1857 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.400000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 05:38:52.404000 audit[1981]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.404000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd619a2390 a2=0 a3=0 items=0 ppid=1857 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.404000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 23 05:38:52.408000 audit[1983]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 
05:38:52.408000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe3fb6c480 a2=0 a3=0 items=0 ppid=1857 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.408000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 05:38:52.412000 audit[1985]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.412000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc52b27010 a2=0 a3=0 items=0 ppid=1857 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.412000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 05:38:52.416000 audit[1987]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.416000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc361488e0 a2=0 a3=0 items=0 ppid=1857 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.416000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 05:38:52.420000 audit[1989]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1989 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.420000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff58973790 a2=0 a3=0 items=0 ppid=1857 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.420000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 05:38:52.429000 audit[1994]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.429000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcd3dfbda0 a2=0 a3=0 items=0 ppid=1857 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.429000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 05:38:52.434000 audit[1996]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.434000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe46672cb0 a2=0 a3=0 items=0 ppid=1857 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.434000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 05:38:52.438000 audit[1998]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1998 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.438000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdc497a6a0 a2=0 a3=0 items=0 ppid=1857 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.438000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 05:38:52.442000 audit[2000]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.442000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd788984a0 a2=0 a3=0 items=0 ppid=1857 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.442000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 05:38:52.445000 audit[2002]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.445000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffddb9b5430 a2=0 a3=0 items=0 ppid=1857 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.445000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 05:38:52.449000 audit[2004]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2004 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:38:52.449000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe3cc78f90 a2=0 a3=0 items=0 ppid=1857 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.449000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 05:38:52.470000 audit[2009]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.470000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc0a53f290 a2=0 a3=0 items=0 ppid=1857 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.470000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 23 05:38:52.475000 audit[2011]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.475000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd97bf4f70 a2=0 a3=0 items=0 ppid=1857 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.475000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 23 05:38:52.493000 audit[2019]: 
NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.493000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff57f33240 a2=0 a3=0 items=0 ppid=1857 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.493000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 23 05:38:52.513000 audit[2025]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.513000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff7e31ae50 a2=0 a3=0 items=0 ppid=1857 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.513000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 23 05:38:52.517000 audit[2027]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.517000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe567965f0 a2=0 a3=0 items=0 ppid=1857 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.517000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 23 05:38:52.521000 audit[2029]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.521000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc5ea76dc0 a2=0 a3=0 items=0 ppid=1857 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.521000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 23 05:38:52.525000 audit[2031]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.525000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffffc343460 a2=0 a3=0 items=0 ppid=1857 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.525000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 05:38:52.528000 audit[2033]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:38:52.528000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe1804f290 
a2=0 a3=0 items=0 ppid=1857 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:38:52.528000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 23 05:38:52.530366 systemd-networkd[1506]: docker0: Link UP Jan 23 05:38:52.536337 dockerd[1857]: time="2026-01-23T05:38:52.536278625Z" level=info msg="Loading containers: done." Jan 23 05:38:52.555862 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck212902473-merged.mount: Deactivated successfully. Jan 23 05:38:52.558939 dockerd[1857]: time="2026-01-23T05:38:52.558847330Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 05:38:52.558939 dockerd[1857]: time="2026-01-23T05:38:52.558937528Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 05:38:52.559126 dockerd[1857]: time="2026-01-23T05:38:52.559020062Z" level=info msg="Initializing buildkit" Jan 23 05:38:52.598465 dockerd[1857]: time="2026-01-23T05:38:52.598242967Z" level=info msg="Completed buildkit initialization" Jan 23 05:38:52.605638 dockerd[1857]: time="2026-01-23T05:38:52.605529223Z" level=info msg="Daemon has completed initialization" Jan 23 05:38:52.605798 dockerd[1857]: time="2026-01-23T05:38:52.605717523Z" level=info msg="API listen on /run/docker.sock" Jan 23 05:38:52.605890 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 23 05:38:52.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:53.330905 containerd[1597]: time="2026-01-23T05:38:53.330849139Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 05:38:53.907813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2688238721.mount: Deactivated successfully. Jan 23 05:38:54.549967 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 05:38:54.552939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 05:38:55.041860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 05:38:55.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:38:55.059442 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 05:38:55.118372 kubelet[2142]: E0123 05:38:55.117952 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 05:38:55.124731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 05:38:55.124926 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 05:38:55.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 23 05:38:55.125463 systemd[1]: kubelet.service: Consumed 504ms CPU time, 110.4M memory peak. Jan 23 05:38:55.426946 containerd[1597]: time="2026-01-23T05:38:55.426857601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:38:55.427798 containerd[1597]: time="2026-01-23T05:38:55.427761798Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 23 05:38:55.429321 containerd[1597]: time="2026-01-23T05:38:55.429275306Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:38:55.433119 containerd[1597]: time="2026-01-23T05:38:55.433033003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:38:55.434241 containerd[1597]: time="2026-01-23T05:38:55.434196917Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.103276935s" Jan 23 05:38:55.434290 containerd[1597]: time="2026-01-23T05:38:55.434265766Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 23 05:38:55.434977 containerd[1597]: time="2026-01-23T05:38:55.434938081Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 05:38:57.007793 containerd[1597]: 
time="2026-01-23T05:38:57.007706369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:38:57.008927 containerd[1597]: time="2026-01-23T05:38:57.008802000Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 23 05:38:57.010219 containerd[1597]: time="2026-01-23T05:38:57.010140871Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:38:57.012926 containerd[1597]: time="2026-01-23T05:38:57.012853851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:38:57.013877 containerd[1597]: time="2026-01-23T05:38:57.013809948Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.578848133s" Jan 23 05:38:57.013877 containerd[1597]: time="2026-01-23T05:38:57.013866694Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 23 05:38:57.014698 containerd[1597]: time="2026-01-23T05:38:57.014641390Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 05:38:58.253307 containerd[1597]: time="2026-01-23T05:38:58.253182388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:38:58.253879 containerd[1597]: time="2026-01-23T05:38:58.253852023Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 23 05:38:58.255128 containerd[1597]: time="2026-01-23T05:38:58.255095224Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:38:58.257877 containerd[1597]: time="2026-01-23T05:38:58.257810833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:38:58.258716 containerd[1597]: time="2026-01-23T05:38:58.258677557Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.243990862s" Jan 23 05:38:58.258765 containerd[1597]: time="2026-01-23T05:38:58.258717292Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 23 05:38:58.259361 containerd[1597]: time="2026-01-23T05:38:58.259330691Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 05:39:00.064974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2140778865.mount: Deactivated successfully. 
Jan 23 05:39:01.201757 containerd[1597]: time="2026-01-23T05:39:01.200856806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:01.209554 containerd[1597]: time="2026-01-23T05:39:01.200818045Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 23 05:39:01.213578 containerd[1597]: time="2026-01-23T05:39:01.213481682Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:01.230851 containerd[1597]: time="2026-01-23T05:39:01.230701120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:01.231197 containerd[1597]: time="2026-01-23T05:39:01.231047875Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.97169391s" Jan 23 05:39:01.231197 containerd[1597]: time="2026-01-23T05:39:01.231187906Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 23 05:39:01.232543 containerd[1597]: time="2026-01-23T05:39:01.232470482Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 05:39:01.799729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956618126.mount: Deactivated successfully. 
Jan 23 05:39:03.390917 containerd[1597]: time="2026-01-23T05:39:03.390781334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:03.393948 containerd[1597]: time="2026-01-23T05:39:03.393896040Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Jan 23 05:39:03.395793 containerd[1597]: time="2026-01-23T05:39:03.395295568Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:03.398469 containerd[1597]: time="2026-01-23T05:39:03.398370797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:03.399525 containerd[1597]: time="2026-01-23T05:39:03.399443631Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.166936129s" Jan 23 05:39:03.399525 containerd[1597]: time="2026-01-23T05:39:03.399487943Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 23 05:39:03.401161 containerd[1597]: time="2026-01-23T05:39:03.401104273Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 05:39:03.934509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175073536.mount: Deactivated successfully. 
Jan 23 05:39:03.989652 containerd[1597]: time="2026-01-23T05:39:03.987348559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 05:39:03.998322 containerd[1597]: time="2026-01-23T05:39:03.997297983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 05:39:04.015430 containerd[1597]: time="2026-01-23T05:39:04.015260138Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 05:39:04.026668 containerd[1597]: time="2026-01-23T05:39:04.026523668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 05:39:04.027566 containerd[1597]: time="2026-01-23T05:39:04.027521211Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 626.377484ms" Jan 23 05:39:04.027680 containerd[1597]: time="2026-01-23T05:39:04.027561285Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 05:39:04.028964 containerd[1597]: time="2026-01-23T05:39:04.028928639Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 05:39:04.964880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount146071588.mount: Deactivated 
successfully. Jan 23 05:39:05.300382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 05:39:05.304398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 05:39:05.889751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 05:39:05.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:39:05.897335 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 23 05:39:05.897437 kernel: audit: type=1130 audit(1769146745.889:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:39:05.930513 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 05:39:06.159038 kubelet[2249]: E0123 05:39:06.158854 2249 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 05:39:06.164248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 05:39:06.164513 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 05:39:06.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 05:39:06.165442 systemd[1]: kubelet.service: Consumed 755ms CPU time, 110.8M memory peak. 
Jan 23 05:39:06.177148 kernel: audit: type=1131 audit(1769146746.164:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 05:39:07.429673 containerd[1597]: time="2026-01-23T05:39:07.429535551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:07.431195 containerd[1597]: time="2026-01-23T05:39:07.431028804Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Jan 23 05:39:07.432965 containerd[1597]: time="2026-01-23T05:39:07.432911850Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:07.436288 containerd[1597]: time="2026-01-23T05:39:07.436213368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:07.437547 containerd[1597]: time="2026-01-23T05:39:07.437483996Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.408527225s" Jan 23 05:39:07.437547 containerd[1597]: time="2026-01-23T05:39:07.437533178Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 23 05:39:10.257385 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 05:39:10.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:39:10.257712 systemd[1]: kubelet.service: Consumed 755ms CPU time, 110.8M memory peak. Jan 23 05:39:10.261092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 05:39:10.264121 kernel: audit: type=1130 audit(1769146750.256:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:39:10.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:39:10.271196 kernel: audit: type=1131 audit(1769146750.256:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:39:10.296418 systemd[1]: Reload requested from client PID 2324 ('systemctl') (unit session-10.scope)... Jan 23 05:39:10.296450 systemd[1]: Reloading... Jan 23 05:39:10.537124 zram_generator::config[2371]: No configuration found. Jan 23 05:39:10.890865 systemd[1]: Reloading finished in 593 ms. 
Jan 23 05:39:10.924000 audit: BPF prog-id=63 op=LOAD Jan 23 05:39:10.930207 kernel: audit: type=1334 audit(1769146750.924:288): prog-id=63 op=LOAD Jan 23 05:39:10.925000 audit: BPF prog-id=43 op=UNLOAD Jan 23 05:39:10.925000 audit: BPF prog-id=64 op=LOAD Jan 23 05:39:10.925000 audit: BPF prog-id=65 op=LOAD Jan 23 05:39:10.936159 kernel: audit: type=1334 audit(1769146750.925:289): prog-id=43 op=UNLOAD Jan 23 05:39:10.936233 kernel: audit: type=1334 audit(1769146750.925:290): prog-id=64 op=LOAD Jan 23 05:39:10.936313 kernel: audit: type=1334 audit(1769146750.925:291): prog-id=65 op=LOAD Jan 23 05:39:10.925000 audit: BPF prog-id=44 op=UNLOAD Jan 23 05:39:10.940767 kernel: audit: type=1334 audit(1769146750.925:292): prog-id=44 op=UNLOAD Jan 23 05:39:10.940810 kernel: audit: type=1334 audit(1769146750.925:293): prog-id=45 op=UNLOAD Jan 23 05:39:10.925000 audit: BPF prog-id=45 op=UNLOAD Jan 23 05:39:10.943031 kernel: audit: type=1334 audit(1769146750.926:294): prog-id=66 op=LOAD Jan 23 05:39:10.926000 audit: BPF prog-id=66 op=LOAD Jan 23 05:39:10.945638 kernel: audit: type=1334 audit(1769146750.926:295): prog-id=58 op=UNLOAD Jan 23 05:39:10.926000 audit: BPF prog-id=58 op=UNLOAD Jan 23 05:39:10.927000 audit: BPF prog-id=67 op=LOAD Jan 23 05:39:10.950744 kernel: audit: type=1334 audit(1769146750.927:296): prog-id=67 op=LOAD Jan 23 05:39:10.950797 kernel: audit: type=1334 audit(1769146750.927:297): prog-id=59 op=UNLOAD Jan 23 05:39:10.927000 audit: BPF prog-id=59 op=UNLOAD Jan 23 05:39:10.927000 audit: BPF prog-id=68 op=LOAD Jan 23 05:39:10.928000 audit: BPF prog-id=46 op=UNLOAD Jan 23 05:39:10.928000 audit: BPF prog-id=69 op=LOAD Jan 23 05:39:10.928000 audit: BPF prog-id=70 op=LOAD Jan 23 05:39:10.928000 audit: BPF prog-id=47 op=UNLOAD Jan 23 05:39:10.928000 audit: BPF prog-id=48 op=UNLOAD Jan 23 05:39:10.928000 audit: BPF prog-id=71 op=LOAD Jan 23 05:39:10.928000 audit: BPF prog-id=72 op=LOAD Jan 23 05:39:10.928000 audit: BPF prog-id=52 op=UNLOAD Jan 23 05:39:10.928000 
audit: BPF prog-id=53 op=UNLOAD Jan 23 05:39:10.931000 audit: BPF prog-id=73 op=LOAD Jan 23 05:39:10.931000 audit: BPF prog-id=60 op=UNLOAD Jan 23 05:39:10.931000 audit: BPF prog-id=74 op=LOAD Jan 23 05:39:10.931000 audit: BPF prog-id=75 op=LOAD Jan 23 05:39:10.931000 audit: BPF prog-id=61 op=UNLOAD Jan 23 05:39:10.931000 audit: BPF prog-id=62 op=UNLOAD Jan 23 05:39:10.932000 audit: BPF prog-id=76 op=LOAD Jan 23 05:39:10.932000 audit: BPF prog-id=49 op=UNLOAD Jan 23 05:39:10.932000 audit: BPF prog-id=77 op=LOAD Jan 23 05:39:10.933000 audit: BPF prog-id=78 op=LOAD Jan 23 05:39:10.933000 audit: BPF prog-id=50 op=UNLOAD Jan 23 05:39:10.933000 audit: BPF prog-id=51 op=UNLOAD Jan 23 05:39:10.936000 audit: BPF prog-id=79 op=LOAD Jan 23 05:39:10.936000 audit: BPF prog-id=54 op=UNLOAD Jan 23 05:39:10.937000 audit: BPF prog-id=80 op=LOAD Jan 23 05:39:10.937000 audit: BPF prog-id=55 op=UNLOAD Jan 23 05:39:10.937000 audit: BPF prog-id=81 op=LOAD Jan 23 05:39:10.937000 audit: BPF prog-id=82 op=LOAD Jan 23 05:39:10.937000 audit: BPF prog-id=56 op=UNLOAD Jan 23 05:39:10.937000 audit: BPF prog-id=57 op=UNLOAD Jan 23 05:39:11.017303 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 05:39:11.017480 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 05:39:11.018015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 05:39:11.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 05:39:11.018252 systemd[1]: kubelet.service: Consumed 239ms CPU time, 98.5M memory peak. Jan 23 05:39:11.021169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 05:39:11.323114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 05:39:11.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:39:11.344460 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 05:39:11.480027 kubelet[2419]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 05:39:11.480027 kubelet[2419]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 05:39:11.480027 kubelet[2419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 05:39:11.480027 kubelet[2419]: I0123 05:39:11.480177 2419 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 05:39:11.800978 kubelet[2419]: I0123 05:39:11.800910 2419 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 05:39:11.800978 kubelet[2419]: I0123 05:39:11.800956 2419 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 05:39:11.801520 kubelet[2419]: I0123 05:39:11.801453 2419 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 05:39:11.844244 kubelet[2419]: I0123 05:39:11.844199 2419 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 05:39:11.852874 kubelet[2419]: E0123 05:39:11.852785 2419 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 23 05:39:11.861643 kubelet[2419]: I0123 05:39:11.861561 2419 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 05:39:11.870023 kubelet[2419]: I0123 05:39:11.869941 2419 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 05:39:11.871248 kubelet[2419]: I0123 05:39:11.871173 2419 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 05:39:11.871768 kubelet[2419]: I0123 05:39:11.871227 2419 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 05:39:11.872710 kubelet[2419]: I0123 05:39:11.871899 2419 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 23 05:39:11.872710 kubelet[2419]: I0123 05:39:11.871910 2419 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 05:39:11.872710 kubelet[2419]: I0123 05:39:11.872551 2419 state_mem.go:36] "Initialized new in-memory state store" Jan 23 05:39:11.892552 kubelet[2419]: I0123 05:39:11.892469 2419 kubelet.go:446] "Attempting to sync node with API server" Jan 23 05:39:11.892552 kubelet[2419]: I0123 05:39:11.892550 2419 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 05:39:11.892753 kubelet[2419]: I0123 05:39:11.892591 2419 kubelet.go:352] "Adding apiserver pod source" Jan 23 05:39:11.892753 kubelet[2419]: I0123 05:39:11.892693 2419 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 05:39:11.895954 kubelet[2419]: W0123 05:39:11.895899 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 23 05:39:11.896046 kubelet[2419]: E0123 05:39:11.896026 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 23 05:39:11.896183 kubelet[2419]: W0123 05:39:11.896131 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 23 05:39:11.896258 kubelet[2419]: E0123 05:39:11.896201 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 23 05:39:11.898897 kubelet[2419]: I0123 05:39:11.898802 2419 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 05:39:11.899368 kubelet[2419]: I0123 05:39:11.899310 2419 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 05:39:11.899498 kubelet[2419]: W0123 05:39:11.899465 2419 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 05:39:11.903303 kubelet[2419]: I0123 05:39:11.903260 2419 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 05:39:11.903381 kubelet[2419]: I0123 05:39:11.903361 2419 server.go:1287] "Started kubelet" Jan 23 05:39:11.904207 kubelet[2419]: I0123 05:39:11.904140 2419 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 05:39:11.905492 kubelet[2419]: I0123 05:39:11.905449 2419 server.go:479] "Adding debug handlers to kubelet server" Jan 23 05:39:11.906881 kubelet[2419]: I0123 05:39:11.906854 2419 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 05:39:11.908327 kubelet[2419]: I0123 05:39:11.907444 2419 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 05:39:11.908327 kubelet[2419]: I0123 05:39:11.907825 2419 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 05:39:11.908327 kubelet[2419]: I0123 05:39:11.908108 2419 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 05:39:11.909982 kubelet[2419]: E0123 05:39:11.909711 2419 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" Jan 23 05:39:11.909982 kubelet[2419]: I0123 05:39:11.909802 2419 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 05:39:11.910638 kubelet[2419]: I0123 05:39:11.910417 2419 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 05:39:11.911802 kubelet[2419]: I0123 05:39:11.911758 2419 reconciler.go:26] "Reconciler: start to sync state" Jan 23 05:39:11.912989 kubelet[2419]: W0123 05:39:11.912872 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 23 05:39:11.913043 kubelet[2419]: E0123 05:39:11.912994 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 23 05:39:11.912000 audit[2432]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:39:11.912000 audit[2432]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff7d4e2f30 a2=0 a3=0 items=0 ppid=2419 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:11.912000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 05:39:11.914769 kubelet[2419]: I0123 05:39:11.913777 2419 factory.go:221] Registration of the systemd container factory successfully Jan 23 05:39:11.914769 kubelet[2419]: I0123 
05:39:11.913988 2419 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 05:39:11.914769 kubelet[2419]: E0123 05:39:11.914256 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms" Jan 23 05:39:11.914000 audit[2433]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:39:11.914000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe10511a70 a2=0 a3=0 items=0 ppid=2419 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:11.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 05:39:11.923975 kubelet[2419]: E0123 05:39:11.923334 2419 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 05:39:11.925042 kubelet[2419]: I0123 05:39:11.924990 2419 factory.go:221] Registration of the containerd container factory successfully Jan 23 05:39:11.924000 audit[2435]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:39:11.924000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff67a89d10 a2=0 a3=0 items=0 ppid=2419 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:11.924000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 05:39:11.963695 kubelet[2419]: E0123 05:39:11.947967 2419 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d4598f6bb94f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 05:39:11.903307 +0000 UTC m=+0.544120070,LastTimestamp:2026-01-23 05:39:11.903307 +0000 UTC m=+0.544120070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 05:39:11.965000 audit[2439]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:39:11.965000 audit[2439]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff5c843e30 a2=0 a3=0 items=0 ppid=2419 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:11.965000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 05:39:12.035187 kubelet[2419]: E0123 05:39:12.016359 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 05:39:12.088000 audit[2446]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:39:12.088000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff63c91260 a2=0 a3=0 items=0 ppid=2419 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:12.088000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 23 05:39:12.090809 kubelet[2419]: I0123 05:39:12.090300 2419 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 23 05:39:12.091000 audit[2447]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:39:12.091000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf84952a0 a2=0 a3=0 items=0 ppid=2419 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:12.091000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 05:39:12.094000 audit[2448]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:12.094000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd77db6c20 a2=0 a3=0 items=0 ppid=2419 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:12.094000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 05:39:12.095000 audit[2449]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:39:12.095000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb9b927c0 a2=0 a3=0 items=0 ppid=2419 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:12.095000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 05:39:12.096998 kubelet[2419]: I0123 05:39:12.096902 2419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 05:39:12.097218 kubelet[2419]: I0123 05:39:12.097189 2419 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 05:39:12.097465 kubelet[2419]: I0123 05:39:12.097411 2419 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 05:39:12.097465 kubelet[2419]: I0123 05:39:12.097462 2419 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 05:39:12.097721 kubelet[2419]: E0123 05:39:12.097597 2419 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 05:39:12.097000 audit[2451]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:12.097000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc47c8a780 a2=0 a3=0 items=0 ppid=2419 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:12.097000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 05:39:12.099000 audit[2452]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:39:12.099000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc390ec80 a2=0 a3=0 items=0 ppid=2419 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:12.099000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 05:39:12.102663 kubelet[2419]: W0123 05:39:12.102522 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 23 05:39:12.102663 kubelet[2419]: E0123 05:39:12.102580 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 23 05:39:12.102000 audit[2454]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:12.102000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb71257a0 a2=0 a3=0 items=0 ppid=2419 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:12.102000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 05:39:12.104258 kubelet[2419]: I0123 05:39:12.104228 2419 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 05:39:12.104258 kubelet[2419]: I0123 05:39:12.104251 2419 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 05:39:12.104384 kubelet[2419]: I0123 05:39:12.104268 2419 state_mem.go:36] "Initialized new in-memory 
state store" Jan 23 05:39:12.104000 audit[2455]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:12.104000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc301da40 a2=0 a3=0 items=0 ppid=2419 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:12.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 05:39:12.115801 kubelet[2419]: E0123 05:39:12.115732 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Jan 23 05:39:12.136304 kubelet[2419]: E0123 05:39:12.136205 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 05:39:12.168322 kubelet[2419]: I0123 05:39:12.168217 2419 policy_none.go:49] "None policy: Start" Jan 23 05:39:12.168322 kubelet[2419]: I0123 05:39:12.168310 2419 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 05:39:12.168451 kubelet[2419]: I0123 05:39:12.168403 2419 state_mem.go:35] "Initializing new in-memory state store" Jan 23 05:39:12.188260 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 05:39:12.198209 kubelet[2419]: E0123 05:39:12.198132 2419 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 05:39:12.208034 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 23 05:39:12.213773 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 05:39:12.223998 kubelet[2419]: I0123 05:39:12.223666 2419 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 05:39:12.224499 kubelet[2419]: I0123 05:39:12.224048 2419 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 05:39:12.225219 kubelet[2419]: I0123 05:39:12.225131 2419 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 05:39:12.226166 kubelet[2419]: I0123 05:39:12.226133 2419 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 05:39:12.228271 kubelet[2419]: E0123 05:39:12.228192 2419 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 05:39:12.228312 kubelet[2419]: E0123 05:39:12.228303 2419 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 05:39:12.328047 kubelet[2419]: I0123 05:39:12.327975 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 05:39:12.328897 kubelet[2419]: E0123 05:39:12.328769 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jan 23 05:39:12.418239 kubelet[2419]: I0123 05:39:12.416477 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:12.418239 kubelet[2419]: 
I0123 05:39:12.416521 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e62de29e52e9d9fbf38a43951c67920-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e62de29e52e9d9fbf38a43951c67920\") " pod="kube-system/kube-apiserver-localhost" Jan 23 05:39:12.418239 kubelet[2419]: I0123 05:39:12.416546 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:12.418239 kubelet[2419]: I0123 05:39:12.416596 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:12.418239 kubelet[2419]: I0123 05:39:12.416668 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:12.418464 kubelet[2419]: I0123 05:39:12.416730 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:12.418464 
kubelet[2419]: I0123 05:39:12.416754 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 23 05:39:12.418464 kubelet[2419]: I0123 05:39:12.416772 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e62de29e52e9d9fbf38a43951c67920-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e62de29e52e9d9fbf38a43951c67920\") " pod="kube-system/kube-apiserver-localhost" Jan 23 05:39:12.418464 kubelet[2419]: I0123 05:39:12.416798 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e62de29e52e9d9fbf38a43951c67920-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9e62de29e52e9d9fbf38a43951c67920\") " pod="kube-system/kube-apiserver-localhost" Jan 23 05:39:12.418455 systemd[1]: Created slice kubepods-burstable-pod9e62de29e52e9d9fbf38a43951c67920.slice - libcontainer container kubepods-burstable-pod9e62de29e52e9d9fbf38a43951c67920.slice. Jan 23 05:39:12.438905 kubelet[2419]: E0123 05:39:12.438808 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 05:39:12.444162 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. 
Jan 23 05:39:12.447027 kubelet[2419]: E0123 05:39:12.446922 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 05:39:12.469987 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 23 05:39:12.473359 kubelet[2419]: E0123 05:39:12.473274 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 05:39:12.517097 kubelet[2419]: E0123 05:39:12.516961 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Jan 23 05:39:12.531106 kubelet[2419]: I0123 05:39:12.531020 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 05:39:12.531710 kubelet[2419]: E0123 05:39:12.531646 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jan 23 05:39:12.747421 kubelet[2419]: E0123 05:39:12.746861 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:12.748043 kubelet[2419]: E0123 05:39:12.747960 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:12.751792 containerd[1597]: time="2026-01-23T05:39:12.751549755Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 23 05:39:12.752592 containerd[1597]: time="2026-01-23T05:39:12.751641510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9e62de29e52e9d9fbf38a43951c67920,Namespace:kube-system,Attempt:0,}" Jan 23 05:39:12.776509 kubelet[2419]: E0123 05:39:12.776402 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:12.786117 containerd[1597]: time="2026-01-23T05:39:12.785924042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 23 05:39:12.887649 containerd[1597]: time="2026-01-23T05:39:12.887442617Z" level=info msg="connecting to shim 886d6fbe3e314caf6c7aa5c744d90f8b54dc8d19f9c7c165d86f5982edcdfd93" address="unix:///run/containerd/s/1be7b683eeafc9f32116becc046c56035f0530b70806a009388470132e35d64c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:39:12.909964 containerd[1597]: time="2026-01-23T05:39:12.909795450Z" level=info msg="connecting to shim fe2ce8b94ca02781c7e6f9e3aa9d81ceee3e048f10d19463086e9f3bf30985e8" address="unix:///run/containerd/s/6bc90140779b34a5a9884f67655a1ea43dbd409afb53b309ce1713956c305075" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:39:12.910549 containerd[1597]: time="2026-01-23T05:39:12.910516199Z" level=info msg="connecting to shim e849694c89287eb59d4c23295738dcae2ffb9855d0adb2ed0ce7a34403bad585" address="unix:///run/containerd/s/d9c0e29d307a3d4989ad8eeee5bb0b4f575c046faa79ebaeb78770dfaa456ecf" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:39:12.937011 kubelet[2419]: I0123 05:39:12.936948 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 05:39:12.937579 kubelet[2419]: E0123 
05:39:12.937505 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jan 23 05:39:12.952228 systemd[1]: Started cri-containerd-fe2ce8b94ca02781c7e6f9e3aa9d81ceee3e048f10d19463086e9f3bf30985e8.scope - libcontainer container fe2ce8b94ca02781c7e6f9e3aa9d81ceee3e048f10d19463086e9f3bf30985e8. Jan 23 05:39:13.239409 kubelet[2419]: W0123 05:39:13.239281 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 23 05:39:13.239534 kubelet[2419]: E0123 05:39:13.239412 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 23 05:39:13.243268 systemd[1]: Started cri-containerd-886d6fbe3e314caf6c7aa5c744d90f8b54dc8d19f9c7c165d86f5982edcdfd93.scope - libcontainer container 886d6fbe3e314caf6c7aa5c744d90f8b54dc8d19f9c7c165d86f5982edcdfd93. 
Jan 23 05:39:13.243937 kubelet[2419]: W0123 05:39:13.243874 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 23 05:39:13.243986 kubelet[2419]: E0123 05:39:13.243937 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 23 05:39:13.249505 systemd[1]: Started cri-containerd-e849694c89287eb59d4c23295738dcae2ffb9855d0adb2ed0ce7a34403bad585.scope - libcontainer container e849694c89287eb59d4c23295738dcae2ffb9855d0adb2ed0ce7a34403bad585. Jan 23 05:39:13.252000 audit: BPF prog-id=83 op=LOAD Jan 23 05:39:13.255000 audit: BPF prog-id=84 op=LOAD Jan 23 05:39:13.255000 audit[2500]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2484 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665326365386239346361303237383163376536663965336161396438 Jan 23 05:39:13.255000 audit: BPF prog-id=84 op=UNLOAD Jan 23 05:39:13.255000 audit[2500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2484 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665326365386239346361303237383163376536663965336161396438 Jan 23 05:39:13.255000 audit: BPF prog-id=85 op=LOAD Jan 23 05:39:13.255000 audit[2500]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2484 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665326365386239346361303237383163376536663965336161396438 Jan 23 05:39:13.256000 audit: BPF prog-id=86 op=LOAD Jan 23 05:39:13.256000 audit[2500]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2484 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665326365386239346361303237383163376536663965336161396438 Jan 23 05:39:13.256000 audit: BPF prog-id=86 op=UNLOAD Jan 23 05:39:13.256000 audit[2500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2484 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665326365386239346361303237383163376536663965336161396438 Jan 23 05:39:13.256000 audit: BPF prog-id=85 op=UNLOAD Jan 23 05:39:13.256000 audit[2500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2484 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665326365386239346361303237383163376536663965336161396438 Jan 23 05:39:13.283000 audit: BPF prog-id=87 op=LOAD Jan 23 05:39:13.286000 audit: BPF prog-id=88 op=LOAD Jan 23 05:39:13.286000 audit[2503]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f4238 a2=98 a3=0 items=0 ppid=2466 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838366436666265336533313463616636633761613563373434643930 Jan 23 05:39:13.286000 audit: BPF prog-id=88 op=UNLOAD Jan 23 05:39:13.286000 audit[2503]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2466 pid=2503 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838366436666265336533313463616636633761613563373434643930 Jan 23 05:39:13.287000 audit: BPF prog-id=89 op=LOAD Jan 23 05:39:13.287000 audit[2503]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f4488 a2=98 a3=0 items=0 ppid=2466 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838366436666265336533313463616636633761613563373434643930 Jan 23 05:39:13.287000 audit: BPF prog-id=90 op=LOAD Jan 23 05:39:13.287000 audit[2503]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001f4218 a2=98 a3=0 items=0 ppid=2466 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838366436666265336533313463616636633761613563373434643930 Jan 23 05:39:13.287000 audit: BPF prog-id=90 op=UNLOAD Jan 23 05:39:13.287000 audit[2503]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 
a1=0 a2=0 a3=0 items=0 ppid=2466 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838366436666265336533313463616636633761613563373434643930 Jan 23 05:39:13.287000 audit: BPF prog-id=89 op=UNLOAD Jan 23 05:39:13.287000 audit[2503]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2466 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838366436666265336533313463616636633761613563373434643930 Jan 23 05:39:13.287000 audit: BPF prog-id=91 op=LOAD Jan 23 05:39:13.288000 audit: BPF prog-id=92 op=LOAD Jan 23 05:39:13.288000 audit: BPF prog-id=93 op=LOAD Jan 23 05:39:13.288000 audit[2500]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2484 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665326365386239346361303237383163376536663965336161396438 Jan 23 
05:39:13.287000 audit[2503]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f46e8 a2=98 a3=0 items=0 ppid=2466 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838366436666265336533313463616636633761613563373434643930 Jan 23 05:39:13.289000 audit: BPF prog-id=94 op=LOAD Jan 23 05:39:13.289000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2487 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538343936393463383932383765623539643463323332393537333864 Jan 23 05:39:13.289000 audit: BPF prog-id=94 op=UNLOAD Jan 23 05:39:13.289000 audit[2532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538343936393463383932383765623539643463323332393537333864 Jan 23 
05:39:13.289000 audit: BPF prog-id=95 op=LOAD Jan 23 05:39:13.289000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2487 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538343936393463383932383765623539643463323332393537333864 Jan 23 05:39:13.289000 audit: BPF prog-id=96 op=LOAD Jan 23 05:39:13.289000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2487 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538343936393463383932383765623539643463323332393537333864 Jan 23 05:39:13.290000 audit: BPF prog-id=96 op=UNLOAD Jan 23 05:39:13.290000 audit[2532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.290000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538343936393463383932383765623539643463323332393537333864 Jan 23 05:39:13.290000 audit: BPF prog-id=95 op=UNLOAD Jan 23 05:39:13.290000 audit[2532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538343936393463383932383765623539643463323332393537333864 Jan 23 05:39:13.290000 audit: BPF prog-id=97 op=LOAD Jan 23 05:39:13.290000 audit[2532]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2487 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538343936393463383932383765623539643463323332393537333864 Jan 23 05:39:13.318021 kubelet[2419]: E0123 05:39:13.317981 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="1.6s" Jan 23 05:39:13.389156 containerd[1597]: 
time="2026-01-23T05:39:13.383586413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9e62de29e52e9d9fbf38a43951c67920,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe2ce8b94ca02781c7e6f9e3aa9d81ceee3e048f10d19463086e9f3bf30985e8\"" Jan 23 05:39:13.392788 containerd[1597]: time="2026-01-23T05:39:13.392745744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"886d6fbe3e314caf6c7aa5c744d90f8b54dc8d19f9c7c165d86f5982edcdfd93\"" Jan 23 05:39:13.394937 kubelet[2419]: E0123 05:39:13.394821 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:13.395502 kubelet[2419]: E0123 05:39:13.395376 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:13.411728 containerd[1597]: time="2026-01-23T05:39:13.411177050Z" level=info msg="CreateContainer within sandbox \"fe2ce8b94ca02781c7e6f9e3aa9d81ceee3e048f10d19463086e9f3bf30985e8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 05:39:13.412312 kubelet[2419]: W0123 05:39:13.412231 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 23 05:39:13.412312 kubelet[2419]: E0123 05:39:13.412326 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" 
logger="UnhandledError" Jan 23 05:39:13.417839 containerd[1597]: time="2026-01-23T05:39:13.405293472Z" level=info msg="CreateContainer within sandbox \"886d6fbe3e314caf6c7aa5c744d90f8b54dc8d19f9c7c165d86f5982edcdfd93\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 05:39:13.500199 kubelet[2419]: W0123 05:39:13.499912 2419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 23 05:39:13.500199 kubelet[2419]: E0123 05:39:13.500198 2419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Jan 23 05:39:13.515102 containerd[1597]: time="2026-01-23T05:39:13.514967439Z" level=info msg="Container 4c1a53a98ae9951bd999018489f679d7fe4418e26611785edb4ce9275fd9b6f4: CDI devices from CRI Config.CDIDevices: []" Jan 23 05:39:13.519325 containerd[1597]: time="2026-01-23T05:39:13.519278643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"e849694c89287eb59d4c23295738dcae2ffb9855d0adb2ed0ce7a34403bad585\"" Jan 23 05:39:13.521715 kubelet[2419]: E0123 05:39:13.521363 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:13.524209 containerd[1597]: time="2026-01-23T05:39:13.524180940Z" level=info msg="CreateContainer within sandbox \"e849694c89287eb59d4c23295738dcae2ffb9855d0adb2ed0ce7a34403bad585\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 05:39:13.527100 containerd[1597]: time="2026-01-23T05:39:13.527023470Z" level=info msg="CreateContainer within sandbox \"886d6fbe3e314caf6c7aa5c744d90f8b54dc8d19f9c7c165d86f5982edcdfd93\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4c1a53a98ae9951bd999018489f679d7fe4418e26611785edb4ce9275fd9b6f4\"" Jan 23 05:39:13.527770 containerd[1597]: time="2026-01-23T05:39:13.527748251Z" level=info msg="StartContainer for \"4c1a53a98ae9951bd999018489f679d7fe4418e26611785edb4ce9275fd9b6f4\"" Jan 23 05:39:13.529151 containerd[1597]: time="2026-01-23T05:39:13.529111412Z" level=info msg="connecting to shim 4c1a53a98ae9951bd999018489f679d7fe4418e26611785edb4ce9275fd9b6f4" address="unix:///run/containerd/s/1be7b683eeafc9f32116becc046c56035f0530b70806a009388470132e35d64c" protocol=ttrpc version=3 Jan 23 05:39:13.529689 containerd[1597]: time="2026-01-23T05:39:13.529639841Z" level=info msg="Container 0ed2be5273b1ee064c418f83407fe2fb87cc07bea7687d2628d4d3bfc4690b8a: CDI devices from CRI Config.CDIDevices: []" Jan 23 05:39:13.541983 containerd[1597]: time="2026-01-23T05:39:13.541893603Z" level=info msg="CreateContainer within sandbox \"fe2ce8b94ca02781c7e6f9e3aa9d81ceee3e048f10d19463086e9f3bf30985e8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0ed2be5273b1ee064c418f83407fe2fb87cc07bea7687d2628d4d3bfc4690b8a\"" Jan 23 05:39:13.543390 containerd[1597]: time="2026-01-23T05:39:13.543286181Z" level=info msg="StartContainer for \"0ed2be5273b1ee064c418f83407fe2fb87cc07bea7687d2628d4d3bfc4690b8a\"" Jan 23 05:39:13.543481 containerd[1597]: time="2026-01-23T05:39:13.543399651Z" level=info msg="Container 4f52be825a2bdac4b15259ee0dff8686b164eb1edd1a52319f95c38ae2d78cac: CDI devices from CRI Config.CDIDevices: []" Jan 23 05:39:13.546037 containerd[1597]: time="2026-01-23T05:39:13.545890607Z" level=info msg="connecting to shim 
0ed2be5273b1ee064c418f83407fe2fb87cc07bea7687d2628d4d3bfc4690b8a" address="unix:///run/containerd/s/6bc90140779b34a5a9884f67655a1ea43dbd409afb53b309ce1713956c305075" protocol=ttrpc version=3 Jan 23 05:39:13.552860 containerd[1597]: time="2026-01-23T05:39:13.552798148Z" level=info msg="CreateContainer within sandbox \"e849694c89287eb59d4c23295738dcae2ffb9855d0adb2ed0ce7a34403bad585\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4f52be825a2bdac4b15259ee0dff8686b164eb1edd1a52319f95c38ae2d78cac\"" Jan 23 05:39:13.553575 containerd[1597]: time="2026-01-23T05:39:13.553516648Z" level=info msg="StartContainer for \"4f52be825a2bdac4b15259ee0dff8686b164eb1edd1a52319f95c38ae2d78cac\"" Jan 23 05:39:13.555967 containerd[1597]: time="2026-01-23T05:39:13.555901425Z" level=info msg="connecting to shim 4f52be825a2bdac4b15259ee0dff8686b164eb1edd1a52319f95c38ae2d78cac" address="unix:///run/containerd/s/d9c0e29d307a3d4989ad8eeee5bb0b4f575c046faa79ebaeb78770dfaa456ecf" protocol=ttrpc version=3 Jan 23 05:39:13.562513 systemd[1]: Started cri-containerd-4c1a53a98ae9951bd999018489f679d7fe4418e26611785edb4ce9275fd9b6f4.scope - libcontainer container 4c1a53a98ae9951bd999018489f679d7fe4418e26611785edb4ce9275fd9b6f4. Jan 23 05:39:13.593592 systemd[1]: Started cri-containerd-0ed2be5273b1ee064c418f83407fe2fb87cc07bea7687d2628d4d3bfc4690b8a.scope - libcontainer container 0ed2be5273b1ee064c418f83407fe2fb87cc07bea7687d2628d4d3bfc4690b8a. 
Jan 23 05:39:13.607000 audit: BPF prog-id=98 op=LOAD Jan 23 05:39:13.608000 audit: BPF prog-id=99 op=LOAD Jan 23 05:39:13.608000 audit[2593]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2466 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316135336139386165393935316264393939303138343839663637 Jan 23 05:39:13.609000 audit: BPF prog-id=99 op=UNLOAD Jan 23 05:39:13.609000 audit[2593]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2466 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316135336139386165393935316264393939303138343839663637 Jan 23 05:39:13.610000 audit: BPF prog-id=100 op=LOAD Jan 23 05:39:13.610000 audit[2593]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2466 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.610000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316135336139386165393935316264393939303138343839663637 Jan 23 05:39:13.610000 audit: BPF prog-id=101 op=LOAD Jan 23 05:39:13.610000 audit[2593]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2466 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316135336139386165393935316264393939303138343839663637 Jan 23 05:39:13.611000 audit: BPF prog-id=101 op=UNLOAD Jan 23 05:39:13.611000 audit[2593]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2466 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316135336139386165393935316264393939303138343839663637 Jan 23 05:39:13.611000 audit: BPF prog-id=100 op=UNLOAD Jan 23 05:39:13.611000 audit[2593]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2466 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
05:39:13.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316135336139386165393935316264393939303138343839663637 Jan 23 05:39:13.611000 audit: BPF prog-id=102 op=LOAD Jan 23 05:39:13.611000 audit[2593]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2466 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463316135336139386165393935316264393939303138343839663637 Jan 23 05:39:13.624436 systemd[1]: Started cri-containerd-4f52be825a2bdac4b15259ee0dff8686b164eb1edd1a52319f95c38ae2d78cac.scope - libcontainer container 4f52be825a2bdac4b15259ee0dff8686b164eb1edd1a52319f95c38ae2d78cac. 
Jan 23 05:39:13.630000 audit: BPF prog-id=103 op=LOAD Jan 23 05:39:13.631000 audit: BPF prog-id=104 op=LOAD Jan 23 05:39:13.631000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2484 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065643262653532373362316565303634633431386638333430376665 Jan 23 05:39:13.631000 audit: BPF prog-id=104 op=UNLOAD Jan 23 05:39:13.631000 audit[2605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2484 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065643262653532373362316565303634633431386638333430376665 Jan 23 05:39:13.631000 audit: BPF prog-id=105 op=LOAD Jan 23 05:39:13.631000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2484 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.631000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065643262653532373362316565303634633431386638333430376665 Jan 23 05:39:13.631000 audit: BPF prog-id=106 op=LOAD Jan 23 05:39:13.631000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2484 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065643262653532373362316565303634633431386638333430376665 Jan 23 05:39:13.631000 audit: BPF prog-id=106 op=UNLOAD Jan 23 05:39:13.631000 audit[2605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2484 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065643262653532373362316565303634633431386638333430376665 Jan 23 05:39:13.631000 audit: BPF prog-id=105 op=UNLOAD Jan 23 05:39:13.631000 audit[2605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2484 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
05:39:13.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065643262653532373362316565303634633431386638333430376665 Jan 23 05:39:13.631000 audit: BPF prog-id=107 op=LOAD Jan 23 05:39:13.631000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2484 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065643262653532373362316565303634633431386638333430376665 Jan 23 05:39:13.657000 audit: BPF prog-id=108 op=LOAD Jan 23 05:39:13.658000 audit: BPF prog-id=109 op=LOAD Jan 23 05:39:13.658000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c238 a2=98 a3=0 items=0 ppid=2487 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466353262653832356132626461633462313532353965653064666638 Jan 23 05:39:13.658000 audit: BPF prog-id=109 op=UNLOAD Jan 23 05:39:13.658000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466353262653832356132626461633462313532353965653064666638 Jan 23 05:39:13.659000 audit: BPF prog-id=110 op=LOAD Jan 23 05:39:13.659000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c488 a2=98 a3=0 items=0 ppid=2487 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466353262653832356132626461633462313532353965653064666638 Jan 23 05:39:13.659000 audit: BPF prog-id=111 op=LOAD Jan 23 05:39:13.659000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00020c218 a2=98 a3=0 items=0 ppid=2487 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466353262653832356132626461633462313532353965653064666638 Jan 23 05:39:13.659000 audit: BPF prog-id=111 op=UNLOAD Jan 23 05:39:13.659000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466353262653832356132626461633462313532353965653064666638 Jan 23 05:39:13.659000 audit: BPF prog-id=110 op=UNLOAD Jan 23 05:39:13.659000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2487 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466353262653832356132626461633462313532353965653064666638 Jan 23 05:39:13.659000 audit: BPF prog-id=112 op=LOAD Jan 23 05:39:13.659000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00020c6e8 a2=98 a3=0 items=0 ppid=2487 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:13.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466353262653832356132626461633462313532353965653064666638 Jan 23 05:39:13.709511 containerd[1597]: time="2026-01-23T05:39:13.709361195Z" level=info msg="StartContainer for \"0ed2be5273b1ee064c418f83407fe2fb87cc07bea7687d2628d4d3bfc4690b8a\" returns 
successfully" Jan 23 05:39:13.737952 containerd[1597]: time="2026-01-23T05:39:13.737883494Z" level=info msg="StartContainer for \"4c1a53a98ae9951bd999018489f679d7fe4418e26611785edb4ce9275fd9b6f4\" returns successfully" Jan 23 05:39:13.744860 kubelet[2419]: I0123 05:39:13.744797 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 05:39:13.747966 kubelet[2419]: E0123 05:39:13.747833 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jan 23 05:39:14.042334 containerd[1597]: time="2026-01-23T05:39:14.042249820Z" level=info msg="StartContainer for \"4f52be825a2bdac4b15259ee0dff8686b164eb1edd1a52319f95c38ae2d78cac\" returns successfully" Jan 23 05:39:14.254287 kubelet[2419]: E0123 05:39:14.254219 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 05:39:14.254484 kubelet[2419]: E0123 05:39:14.254444 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:14.258084 kubelet[2419]: E0123 05:39:14.257601 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 05:39:14.258084 kubelet[2419]: E0123 05:39:14.257760 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:14.287166 kubelet[2419]: E0123 05:39:14.286469 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 05:39:14.288863 kubelet[2419]: E0123 05:39:14.288800 
2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:15.381195 kubelet[2419]: E0123 05:39:15.380818 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 05:39:15.383515 kubelet[2419]: E0123 05:39:15.382901 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:15.383515 kubelet[2419]: I0123 05:39:15.382481 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 05:39:15.391845 kubelet[2419]: E0123 05:39:15.391783 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 05:39:15.393954 kubelet[2419]: E0123 05:39:15.393923 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:16.382292 kubelet[2419]: E0123 05:39:16.382180 2419 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 05:39:16.382970 kubelet[2419]: E0123 05:39:16.382364 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:16.643226 kubelet[2419]: E0123 05:39:16.642996 2419 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 23 05:39:16.707097 kubelet[2419]: I0123 05:39:16.706982 2419 kubelet_node_status.go:78] "Successfully registered node" 
node="localhost" Jan 23 05:39:16.707097 kubelet[2419]: E0123 05:39:16.707022 2419 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 23 05:39:16.713886 kubelet[2419]: I0123 05:39:16.713502 2419 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:16.752803 kubelet[2419]: E0123 05:39:16.752568 2419 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188d4598f6bb94f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 05:39:11.903307 +0000 UTC m=+0.544120070,LastTimestamp:2026-01-23 05:39:11.903307 +0000 UTC m=+0.544120070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 05:39:16.767909 kubelet[2419]: E0123 05:39:16.767825 2419 kubelet.go:3196] "Failed creating a mirror pod" err="namespaces \"kube-system\" not found" pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:16.767909 kubelet[2419]: I0123 05:39:16.767861 2419 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 05:39:16.823892 kubelet[2419]: E0123 05:39:16.823815 2419 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 23 05:39:16.823892 kubelet[2419]: I0123 05:39:16.823865 2419 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 05:39:16.825520 
kubelet[2419]: E0123 05:39:16.825474 2419 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 23 05:39:17.025090 kubelet[2419]: I0123 05:39:17.024932 2419 apiserver.go:52] "Watching apiserver" Jan 23 05:39:17.111493 kubelet[2419]: I0123 05:39:17.111440 2419 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 05:39:18.858919 kubelet[2419]: I0123 05:39:18.858830 2419 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 05:39:18.868111 kubelet[2419]: E0123 05:39:18.868012 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:18.874779 systemd[1]: Reload requested from client PID 2693 ('systemctl') (unit session-10.scope)... Jan 23 05:39:18.874829 systemd[1]: Reloading... Jan 23 05:39:18.975235 zram_generator::config[2739]: No configuration found. 
Jan 23 05:39:19.301550 kubelet[2419]: I0123 05:39:19.301234 2419 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:19.363871 kubelet[2419]: E0123 05:39:19.363760 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:19.391875 kubelet[2419]: E0123 05:39:19.391547 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:19.392433 kubelet[2419]: E0123 05:39:19.391957 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:19.487035 systemd[1]: Reloading finished in 611 ms. Jan 23 05:39:19.518972 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 05:39:19.538857 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 05:39:19.539318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 05:39:19.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:39:19.539409 systemd[1]: kubelet.service: Consumed 2.708s CPU time, 132.5M memory peak. Jan 23 05:39:19.541359 kernel: kauditd_printk_skb: 200 callbacks suppressed Jan 23 05:39:19.541432 kernel: audit: type=1131 audit(1769146759.538:390): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:39:19.542966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 05:39:19.543000 audit: BPF prog-id=113 op=LOAD Jan 23 05:39:19.550543 kernel: audit: type=1334 audit(1769146759.543:391): prog-id=113 op=LOAD Jan 23 05:39:19.543000 audit: BPF prog-id=114 op=LOAD Jan 23 05:39:19.553887 kernel: audit: type=1334 audit(1769146759.543:392): prog-id=114 op=LOAD Jan 23 05:39:19.554106 kernel: audit: type=1334 audit(1769146759.543:393): prog-id=71 op=UNLOAD Jan 23 05:39:19.543000 audit: BPF prog-id=71 op=UNLOAD Jan 23 05:39:19.543000 audit: BPF prog-id=72 op=UNLOAD Jan 23 05:39:19.559569 kernel: audit: type=1334 audit(1769146759.543:394): prog-id=72 op=UNLOAD Jan 23 05:39:19.559684 kernel: audit: type=1334 audit(1769146759.544:395): prog-id=115 op=LOAD Jan 23 05:39:19.544000 audit: BPF prog-id=115 op=LOAD Jan 23 05:39:19.544000 audit: BPF prog-id=68 op=UNLOAD Jan 23 05:39:19.566122 kernel: audit: type=1334 audit(1769146759.544:396): prog-id=68 op=UNLOAD Jan 23 05:39:19.544000 audit: BPF prog-id=116 op=LOAD Jan 23 05:39:19.544000 audit: BPF prog-id=117 op=LOAD Jan 23 05:39:19.584324 kernel: audit: type=1334 audit(1769146759.544:397): prog-id=116 op=LOAD Jan 23 05:39:19.584394 kernel: audit: type=1334 audit(1769146759.544:398): prog-id=117 op=LOAD Jan 23 05:39:19.584418 kernel: audit: type=1334 audit(1769146759.544:399): prog-id=69 op=UNLOAD Jan 23 05:39:19.544000 audit: BPF prog-id=69 op=UNLOAD Jan 23 05:39:19.544000 audit: BPF prog-id=70 op=UNLOAD Jan 23 05:39:19.545000 audit: BPF prog-id=118 op=LOAD Jan 23 05:39:19.545000 audit: BPF prog-id=67 op=UNLOAD Jan 23 05:39:19.546000 audit: BPF prog-id=119 op=LOAD Jan 23 05:39:19.546000 audit: BPF prog-id=79 op=UNLOAD Jan 23 05:39:19.548000 audit: BPF prog-id=120 op=LOAD Jan 23 05:39:19.561000 audit: BPF prog-id=63 op=UNLOAD Jan 23 05:39:19.561000 audit: BPF prog-id=121 op=LOAD Jan 23 05:39:19.561000 audit: BPF prog-id=122 op=LOAD Jan 23 05:39:19.561000 audit: BPF prog-id=64 op=UNLOAD Jan 23 05:39:19.561000 audit: BPF prog-id=65 op=UNLOAD Jan 23 05:39:19.563000 audit: BPF prog-id=123 
op=LOAD Jan 23 05:39:19.563000 audit: BPF prog-id=66 op=UNLOAD Jan 23 05:39:19.567000 audit: BPF prog-id=124 op=LOAD Jan 23 05:39:19.567000 audit: BPF prog-id=76 op=UNLOAD Jan 23 05:39:19.568000 audit: BPF prog-id=125 op=LOAD Jan 23 05:39:19.568000 audit: BPF prog-id=126 op=LOAD Jan 23 05:39:19.568000 audit: BPF prog-id=77 op=UNLOAD Jan 23 05:39:19.568000 audit: BPF prog-id=78 op=UNLOAD Jan 23 05:39:19.571000 audit: BPF prog-id=127 op=LOAD Jan 23 05:39:19.572000 audit: BPF prog-id=73 op=UNLOAD Jan 23 05:39:19.572000 audit: BPF prog-id=128 op=LOAD Jan 23 05:39:19.572000 audit: BPF prog-id=129 op=LOAD Jan 23 05:39:19.572000 audit: BPF prog-id=74 op=UNLOAD Jan 23 05:39:19.572000 audit: BPF prog-id=75 op=UNLOAD Jan 23 05:39:19.573000 audit: BPF prog-id=130 op=LOAD Jan 23 05:39:19.573000 audit: BPF prog-id=80 op=UNLOAD Jan 23 05:39:19.573000 audit: BPF prog-id=131 op=LOAD Jan 23 05:39:19.573000 audit: BPF prog-id=132 op=LOAD Jan 23 05:39:19.573000 audit: BPF prog-id=81 op=UNLOAD Jan 23 05:39:19.573000 audit: BPF prog-id=82 op=UNLOAD Jan 23 05:39:19.843743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 05:39:19.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:39:19.853473 (kubelet)[2784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 05:39:19.917638 kubelet[2784]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 05:39:19.917638 kubelet[2784]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 23 05:39:19.917638 kubelet[2784]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 05:39:19.917638 kubelet[2784]: I0123 05:39:19.917381 2784 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 05:39:19.928256 kubelet[2784]: I0123 05:39:19.928214 2784 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 05:39:19.928256 kubelet[2784]: I0123 05:39:19.928246 2784 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 05:39:19.928455 kubelet[2784]: I0123 05:39:19.928447 2784 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 05:39:19.929698 kubelet[2784]: I0123 05:39:19.929661 2784 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 05:39:19.931975 kubelet[2784]: I0123 05:39:19.931898 2784 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 05:39:19.937771 kubelet[2784]: I0123 05:39:19.937749 2784 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 05:39:19.944713 kubelet[2784]: I0123 05:39:19.944681 2784 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 05:39:19.944981 kubelet[2784]: I0123 05:39:19.944896 2784 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 05:39:19.945121 kubelet[2784]: I0123 05:39:19.944944 2784 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 05:39:19.945309 kubelet[2784]: I0123 05:39:19.945133 2784 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 23 05:39:19.945309 kubelet[2784]: I0123 05:39:19.945143 2784 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 05:39:19.945309 kubelet[2784]: I0123 05:39:19.945188 2784 state_mem.go:36] "Initialized new in-memory state store" Jan 23 05:39:19.945368 kubelet[2784]: I0123 05:39:19.945356 2784 kubelet.go:446] "Attempting to sync node with API server" Jan 23 05:39:19.945390 kubelet[2784]: I0123 05:39:19.945375 2784 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 05:39:19.945434 kubelet[2784]: I0123 05:39:19.945399 2784 kubelet.go:352] "Adding apiserver pod source" Jan 23 05:39:19.945434 kubelet[2784]: I0123 05:39:19.945410 2784 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 05:39:19.945957 kubelet[2784]: I0123 05:39:19.945921 2784 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 05:39:19.946273 kubelet[2784]: I0123 05:39:19.946250 2784 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 05:39:19.946723 kubelet[2784]: I0123 05:39:19.946681 2784 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 05:39:19.946723 kubelet[2784]: I0123 05:39:19.946719 2784 server.go:1287] "Started kubelet" Jan 23 05:39:19.949255 kubelet[2784]: I0123 05:39:19.948988 2784 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 05:39:19.960100 kubelet[2784]: I0123 05:39:19.959955 2784 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 05:39:19.961396 kubelet[2784]: I0123 05:39:19.961279 2784 server.go:479] "Adding debug handlers to kubelet server" Jan 23 05:39:19.963889 kubelet[2784]: I0123 05:39:19.963804 2784 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 05:39:19.964110 kubelet[2784]: I0123 05:39:19.964016 2784 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 05:39:19.964259 kubelet[2784]: I0123 05:39:19.964205 2784 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 05:39:19.964542 kubelet[2784]: I0123 05:39:19.964526 2784 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 05:39:19.964883 kubelet[2784]: I0123 05:39:19.964836 2784 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 05:39:19.965159 kubelet[2784]: I0123 05:39:19.965146 2784 reconciler.go:26] "Reconciler: start to sync state" Jan 23 05:39:19.967283 kubelet[2784]: E0123 05:39:19.967150 2784 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 05:39:19.975481 kubelet[2784]: I0123 05:39:19.975354 2784 factory.go:221] Registration of the containerd container factory successfully Jan 23 05:39:19.975809 kubelet[2784]: I0123 05:39:19.975746 2784 factory.go:221] Registration of the systemd container factory successfully Jan 23 05:39:19.976657 kubelet[2784]: I0123 05:39:19.976142 2784 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 05:39:19.981250 kubelet[2784]: I0123 05:39:19.981193 2784 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 05:39:19.989601 kubelet[2784]: I0123 05:39:19.989568 2784 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 23 05:39:19.989863 kubelet[2784]: I0123 05:39:19.989852 2784 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 05:39:19.990164 kubelet[2784]: I0123 05:39:19.990151 2784 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 05:39:19.990532 kubelet[2784]: I0123 05:39:19.990520 2784 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 05:39:19.990775 kubelet[2784]: E0123 05:39:19.990720 2784 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 05:39:20.069153 kubelet[2784]: I0123 05:39:20.069111 2784 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 05:39:20.069409 kubelet[2784]: I0123 05:39:20.069394 2784 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 05:39:20.069542 kubelet[2784]: I0123 05:39:20.069531 2784 state_mem.go:36] "Initialized new in-memory state store" Jan 23 05:39:20.069969 kubelet[2784]: I0123 05:39:20.069953 2784 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 05:39:20.071491 kubelet[2784]: I0123 05:39:20.070177 2784 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 05:39:20.071491 kubelet[2784]: I0123 05:39:20.070206 2784 policy_none.go:49] "None policy: Start" Jan 23 05:39:20.071491 kubelet[2784]: I0123 05:39:20.070216 2784 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 05:39:20.071491 kubelet[2784]: I0123 05:39:20.070228 2784 state_mem.go:35] "Initializing new in-memory state store" Jan 23 05:39:20.071491 kubelet[2784]: I0123 05:39:20.070564 2784 state_mem.go:75] "Updated machine memory state" Jan 23 05:39:20.083408 kubelet[2784]: I0123 05:39:20.083389 2784 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 05:39:20.083850 kubelet[2784]: I0123 
05:39:20.083834 2784 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 05:39:20.084518 kubelet[2784]: I0123 05:39:20.084286 2784 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 05:39:20.084917 kubelet[2784]: I0123 05:39:20.084904 2784 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 05:39:20.087184 kubelet[2784]: E0123 05:39:20.087138 2784 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 05:39:20.094250 kubelet[2784]: I0123 05:39:20.094155 2784 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 05:39:20.096526 kubelet[2784]: I0123 05:39:20.095271 2784 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:20.096788 kubelet[2784]: I0123 05:39:20.095367 2784 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 05:39:20.109206 kubelet[2784]: E0123 05:39:20.108965 2784 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 23 05:39:20.110092 kubelet[2784]: E0123 05:39:20.110014 2784 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:20.168957 kubelet[2784]: I0123 05:39:20.168872 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e62de29e52e9d9fbf38a43951c67920-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e62de29e52e9d9fbf38a43951c67920\") " pod="kube-system/kube-apiserver-localhost" Jan 23 05:39:20.169166 kubelet[2784]: I0123 05:39:20.168970 2784 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e62de29e52e9d9fbf38a43951c67920-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9e62de29e52e9d9fbf38a43951c67920\") " pod="kube-system/kube-apiserver-localhost" Jan 23 05:39:20.169166 kubelet[2784]: I0123 05:39:20.169131 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:20.169236 kubelet[2784]: I0123 05:39:20.169182 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 23 05:39:20.169320 kubelet[2784]: I0123 05:39:20.169261 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e62de29e52e9d9fbf38a43951c67920-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e62de29e52e9d9fbf38a43951c67920\") " pod="kube-system/kube-apiserver-localhost" Jan 23 05:39:20.169320 kubelet[2784]: I0123 05:39:20.169308 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:20.169367 kubelet[2784]: I0123 05:39:20.169335 2784 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:20.169367 kubelet[2784]: I0123 05:39:20.169350 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:20.169367 kubelet[2784]: I0123 05:39:20.169364 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 05:39:20.198583 kubelet[2784]: I0123 05:39:20.198486 2784 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 05:39:20.207772 kubelet[2784]: I0123 05:39:20.207710 2784 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 23 05:39:20.207925 kubelet[2784]: I0123 05:39:20.207820 2784 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 05:39:20.409708 kubelet[2784]: E0123 05:39:20.409205 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:20.409708 kubelet[2784]: E0123 05:39:20.409500 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:20.410457 kubelet[2784]: E0123 05:39:20.410318 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:20.946485 kubelet[2784]: I0123 05:39:20.946402 2784 apiserver.go:52] "Watching apiserver" Jan 23 05:39:20.965707 kubelet[2784]: I0123 05:39:20.965654 2784 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 05:39:21.009777 kubelet[2784]: I0123 05:39:21.009713 2784 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 05:39:21.009777 kubelet[2784]: E0123 05:39:21.009766 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:21.010119 kubelet[2784]: E0123 05:39:21.009725 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:21.031353 kubelet[2784]: E0123 05:39:21.031171 2784 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 23 05:39:21.032724 kubelet[2784]: E0123 05:39:21.032703 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:21.034293 kubelet[2784]: I0123 05:39:21.033956 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.033882388 podStartE2EDuration="3.033882388s" podCreationTimestamp="2026-01-23 05:39:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 05:39:21.03249989 +0000 UTC m=+1.172093111" watchObservedRunningTime="2026-01-23 05:39:21.033882388 +0000 UTC m=+1.173475599" Jan 23 05:39:21.066302 kubelet[2784]: I0123 05:39:21.066235 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.066218409 podStartE2EDuration="2.066218409s" podCreationTimestamp="2026-01-23 05:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 05:39:21.066129086 +0000 UTC m=+1.205722317" watchObservedRunningTime="2026-01-23 05:39:21.066218409 +0000 UTC m=+1.205811610" Jan 23 05:39:21.093968 kubelet[2784]: I0123 05:39:21.093859 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.093834883 podStartE2EDuration="1.093834883s" podCreationTimestamp="2026-01-23 05:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 05:39:21.079710134 +0000 UTC m=+1.219303344" watchObservedRunningTime="2026-01-23 05:39:21.093834883 +0000 UTC m=+1.233428094" Jan 23 05:39:22.011151 kubelet[2784]: E0123 05:39:22.011043 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:22.011735 kubelet[2784]: E0123 05:39:22.011216 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:22.528778 update_engine[1572]: I20260123 05:39:22.528649 1572 update_attempter.cc:509] Updating boot flags... 
Jan 23 05:39:23.013172 kubelet[2784]: E0123 05:39:23.013096 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:23.749793 kubelet[2784]: E0123 05:39:23.749570 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:24.015427 kubelet[2784]: E0123 05:39:24.015108 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:24.159401 kubelet[2784]: I0123 05:39:24.159346 2784 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 05:39:24.159906 containerd[1597]: time="2026-01-23T05:39:24.159869361Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 05:39:24.160294 kubelet[2784]: I0123 05:39:24.160168 2784 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 05:39:25.017177 kubelet[2784]: E0123 05:39:25.017107 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:25.156153 systemd[1]: Created slice kubepods-besteffort-podbd86873d_5fba_48f6_8f67_1fdf654f6282.slice - libcontainer container kubepods-besteffort-podbd86873d_5fba_48f6_8f67_1fdf654f6282.slice. 
Jan 23 05:39:25.212665 kubelet[2784]: I0123 05:39:25.212574 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd86873d-5fba-48f6-8f67-1fdf654f6282-xtables-lock\") pod \"kube-proxy-kr7dn\" (UID: \"bd86873d-5fba-48f6-8f67-1fdf654f6282\") " pod="kube-system/kube-proxy-kr7dn" Jan 23 05:39:25.212840 kubelet[2784]: I0123 05:39:25.212686 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd86873d-5fba-48f6-8f67-1fdf654f6282-lib-modules\") pod \"kube-proxy-kr7dn\" (UID: \"bd86873d-5fba-48f6-8f67-1fdf654f6282\") " pod="kube-system/kube-proxy-kr7dn" Jan 23 05:39:25.212840 kubelet[2784]: I0123 05:39:25.212723 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rccbt\" (UniqueName: \"kubernetes.io/projected/bd86873d-5fba-48f6-8f67-1fdf654f6282-kube-api-access-rccbt\") pod \"kube-proxy-kr7dn\" (UID: \"bd86873d-5fba-48f6-8f67-1fdf654f6282\") " pod="kube-system/kube-proxy-kr7dn" Jan 23 05:39:25.212840 kubelet[2784]: I0123 05:39:25.212757 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd86873d-5fba-48f6-8f67-1fdf654f6282-kube-proxy\") pod \"kube-proxy-kr7dn\" (UID: \"bd86873d-5fba-48f6-8f67-1fdf654f6282\") " pod="kube-system/kube-proxy-kr7dn" Jan 23 05:39:25.284038 systemd[1]: Created slice kubepods-besteffort-pod8dd514c9_c136_4a72_92ad_a6294c9325fc.slice - libcontainer container kubepods-besteffort-pod8dd514c9_c136_4a72_92ad_a6294c9325fc.slice. 
Jan 23 05:39:25.313808 kubelet[2784]: I0123 05:39:25.313762 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xrcv\" (UniqueName: \"kubernetes.io/projected/8dd514c9-c136-4a72-92ad-a6294c9325fc-kube-api-access-5xrcv\") pod \"tigera-operator-7dcd859c48-6wq2b\" (UID: \"8dd514c9-c136-4a72-92ad-a6294c9325fc\") " pod="tigera-operator/tigera-operator-7dcd859c48-6wq2b" Jan 23 05:39:25.313968 kubelet[2784]: I0123 05:39:25.313825 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8dd514c9-c136-4a72-92ad-a6294c9325fc-var-lib-calico\") pod \"tigera-operator-7dcd859c48-6wq2b\" (UID: \"8dd514c9-c136-4a72-92ad-a6294c9325fc\") " pod="tigera-operator/tigera-operator-7dcd859c48-6wq2b" Jan 23 05:39:25.468089 kubelet[2784]: E0123 05:39:25.467978 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:25.468934 containerd[1597]: time="2026-01-23T05:39:25.468865725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kr7dn,Uid:bd86873d-5fba-48f6-8f67-1fdf654f6282,Namespace:kube-system,Attempt:0,}" Jan 23 05:39:25.516651 containerd[1597]: time="2026-01-23T05:39:25.516567051Z" level=info msg="connecting to shim e5dd3c906f36940a195ea3bc468adb9d0ba730be03dedf6dde6db288cd514734" address="unix:///run/containerd/s/d44d2f312279b305641b58231cc2d69a7f7ab4b19b389fcb9a19d9112c897c9f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:39:25.574357 systemd[1]: Started cri-containerd-e5dd3c906f36940a195ea3bc468adb9d0ba730be03dedf6dde6db288cd514734.scope - libcontainer container e5dd3c906f36940a195ea3bc468adb9d0ba730be03dedf6dde6db288cd514734. 
Jan 23 05:39:25.590512 containerd[1597]: time="2026-01-23T05:39:25.590403699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6wq2b,Uid:8dd514c9-c136-4a72-92ad-a6294c9325fc,Namespace:tigera-operator,Attempt:0,}"
Jan 23 05:39:25.594000 audit: BPF prog-id=133 op=LOAD
Jan 23 05:39:25.598090 kernel: kauditd_printk_skb: 32 callbacks suppressed
Jan 23 05:39:25.598159 kernel: audit: type=1334 audit(1769146765.594:432): prog-id=133 op=LOAD
Jan 23 05:39:25.595000 audit: BPF prog-id=134 op=LOAD
Jan 23 05:39:25.605318 kernel: audit: type=1334 audit(1769146765.595:433): prog-id=134 op=LOAD
Jan 23 05:39:25.605420 kernel: audit: type=1300 audit(1769146765.595:433): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2863 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.595000 audit[2875]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2863 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535646433633930366633363934306131393565613362633436386164
Jan 23 05:39:25.627703 kernel: audit: type=1327 audit(1769146765.595:433): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535646433633930366633363934306131393565613362633436386164
Jan 23 05:39:25.627809 kernel: audit: type=1334 audit(1769146765.595:434): prog-id=134 op=UNLOAD
Jan 23 05:39:25.595000 audit: BPF prog-id=134 op=UNLOAD
Jan 23 05:39:25.631114 kernel: audit: type=1300 audit(1769146765.595:434): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2863 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.595000 audit[2875]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2863 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535646433633930366633363934306131393565613362633436386164
Jan 23 05:39:25.643317 containerd[1597]: time="2026-01-23T05:39:25.643278734Z" level=info msg="connecting to shim f1e9e2c231905559a78f6ecb0e2a2980c7fa7d532b30c89fa93f50f17c954571" address="unix:///run/containerd/s/59c97fb8d1d658a42901078c8077b6c11d74a0562fe923a99f592244189efd24" namespace=k8s.io protocol=ttrpc version=3
Jan 23 05:39:25.652183 kernel: audit: type=1327 audit(1769146765.595:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535646433633930366633363934306131393565613362633436386164
Jan 23 05:39:25.652261 kernel: audit: type=1334 audit(1769146765.595:435): prog-id=135 op=LOAD
Jan 23 05:39:25.595000 audit: BPF prog-id=135 op=LOAD
Jan 23 05:39:25.655328 kernel: audit: type=1300 audit(1769146765.595:435): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2863 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.595000 audit[2875]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2863 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.657494 containerd[1597]: time="2026-01-23T05:39:25.657469196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kr7dn,Uid:bd86873d-5fba-48f6-8f67-1fdf654f6282,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5dd3c906f36940a195ea3bc468adb9d0ba730be03dedf6dde6db288cd514734\""
Jan 23 05:39:25.659227 kubelet[2784]: E0123 05:39:25.659205 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 05:39:25.662258 containerd[1597]: time="2026-01-23T05:39:25.661767377Z" level=info msg="CreateContainer within sandbox \"e5dd3c906f36940a195ea3bc468adb9d0ba730be03dedf6dde6db288cd514734\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 05:39:25.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535646433633930366633363934306131393565613362633436386164
Jan 23 05:39:25.595000 audit: BPF prog-id=136 op=LOAD
Jan 23 05:39:25.595000 audit[2875]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2863 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.681131 kernel: audit: type=1327 audit(1769146765.595:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535646433633930366633363934306131393565613362633436386164
Jan 23 05:39:25.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535646433633930366633363934306131393565613362633436386164
Jan 23 05:39:25.595000 audit: BPF prog-id=136 op=UNLOAD
Jan 23 05:39:25.595000 audit[2875]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2863 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535646433633930366633363934306131393565613362633436386164
Jan 23 05:39:25.595000 audit: BPF prog-id=135 op=UNLOAD
Jan 23 05:39:25.595000 audit[2875]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2863 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535646433633930366633363934306131393565613362633436386164
Jan 23 05:39:25.595000 audit: BPF prog-id=137 op=LOAD
Jan 23 05:39:25.595000 audit[2875]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2863 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535646433633930366633363934306131393565613362633436386164
Jan 23 05:39:25.683038 containerd[1597]: time="2026-01-23T05:39:25.682967343Z" level=info msg="Container 437871ea31317aad6e38d5da4f6ecd61093b5b4573033a9881069e4fb51571c1: CDI devices from CRI Config.CDIDevices: []"
Jan 23 05:39:25.689396 systemd[1]: Started cri-containerd-f1e9e2c231905559a78f6ecb0e2a2980c7fa7d532b30c89fa93f50f17c954571.scope - libcontainer container f1e9e2c231905559a78f6ecb0e2a2980c7fa7d532b30c89fa93f50f17c954571.
Jan 23 05:39:25.703000 audit: BPF prog-id=138 op=LOAD
Jan 23 05:39:25.704000 audit: BPF prog-id=139 op=LOAD
Jan 23 05:39:25.704000 audit[2920]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2909 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653965326332333139303535353961373866366563623065326132
Jan 23 05:39:25.704000 audit: BPF prog-id=139 op=UNLOAD
Jan 23 05:39:25.704000 audit[2920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2909 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653965326332333139303535353961373866366563623065326132
Jan 23 05:39:25.705000 audit: BPF prog-id=140 op=LOAD
Jan 23 05:39:25.705000 audit[2920]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2909 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653965326332333139303535353961373866366563623065326132
Jan 23 05:39:25.705000 audit: BPF prog-id=141 op=LOAD
Jan 23 05:39:25.705000 audit[2920]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2909 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653965326332333139303535353961373866366563623065326132
Jan 23 05:39:25.706000 audit: BPF prog-id=141 op=UNLOAD
Jan 23 05:39:25.706000 audit[2920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2909 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653965326332333139303535353961373866366563623065326132
Jan 23 05:39:25.706000 audit: BPF prog-id=140 op=UNLOAD
Jan 23 05:39:25.706000 audit[2920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2909 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653965326332333139303535353961373866366563623065326132
Jan 23 05:39:25.706000 audit: BPF prog-id=142 op=LOAD
Jan 23 05:39:25.706000 audit[2920]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2909 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653965326332333139303535353961373866366563623065326132
Jan 23 05:39:25.713495 containerd[1597]: time="2026-01-23T05:39:25.713361746Z" level=info msg="CreateContainer within sandbox \"e5dd3c906f36940a195ea3bc468adb9d0ba730be03dedf6dde6db288cd514734\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"437871ea31317aad6e38d5da4f6ecd61093b5b4573033a9881069e4fb51571c1\""
Jan 23 05:39:25.714352 containerd[1597]: time="2026-01-23T05:39:25.714275122Z" level=info msg="StartContainer for \"437871ea31317aad6e38d5da4f6ecd61093b5b4573033a9881069e4fb51571c1\""
Jan 23 05:39:25.718432 containerd[1597]: time="2026-01-23T05:39:25.718379582Z" level=info msg="connecting to shim 437871ea31317aad6e38d5da4f6ecd61093b5b4573033a9881069e4fb51571c1" address="unix:///run/containerd/s/d44d2f312279b305641b58231cc2d69a7f7ab4b19b389fcb9a19d9112c897c9f" protocol=ttrpc version=3
Jan 23 05:39:25.750326 systemd[1]: Started cri-containerd-437871ea31317aad6e38d5da4f6ecd61093b5b4573033a9881069e4fb51571c1.scope - libcontainer container 437871ea31317aad6e38d5da4f6ecd61093b5b4573033a9881069e4fb51571c1.
Jan 23 05:39:25.758780 containerd[1597]: time="2026-01-23T05:39:25.758701123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6wq2b,Uid:8dd514c9-c136-4a72-92ad-a6294c9325fc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f1e9e2c231905559a78f6ecb0e2a2980c7fa7d532b30c89fa93f50f17c954571\""
Jan 23 05:39:25.762505 containerd[1597]: time="2026-01-23T05:39:25.762377687Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 23 05:39:25.821000 audit: BPF prog-id=143 op=LOAD
Jan 23 05:39:25.821000 audit[2940]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2863 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433373837316561333133313761616436653338643564613466366563
Jan 23 05:39:25.821000 audit: BPF prog-id=144 op=LOAD
Jan 23 05:39:25.821000 audit[2940]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2863 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433373837316561333133313761616436653338643564613466366563
Jan 23 05:39:25.821000 audit: BPF prog-id=144 op=UNLOAD
Jan 23 05:39:25.821000 audit[2940]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2863 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433373837316561333133313761616436653338643564613466366563
Jan 23 05:39:25.821000 audit: BPF prog-id=143 op=UNLOAD
Jan 23 05:39:25.821000 audit[2940]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2863 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433373837316561333133313761616436653338643564613466366563
Jan 23 05:39:25.821000 audit: BPF prog-id=145 op=LOAD
Jan 23 05:39:25.821000 audit[2940]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2863 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:25.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433373837316561333133313761616436653338643564613466366563
Jan 23 05:39:25.846141 containerd[1597]: time="2026-01-23T05:39:25.845800060Z" level=info msg="StartContainer for \"437871ea31317aad6e38d5da4f6ecd61093b5b4573033a9881069e4fb51571c1\" returns successfully"
Jan 23 05:39:26.021649 kubelet[2784]: E0123 05:39:26.021520 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 05:39:26.087000 audit[3012]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 23 05:39:26.087000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffed878210 a2=0 a3=7fffed8781fc items=0 ppid=2961 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.087000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Jan 23 05:39:26.089000 audit[3013]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3013 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.089000 audit[3013]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda54ab380 a2=0 a3=7ffda54ab36c items=0 ppid=2961 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.089000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Jan 23 05:39:26.090000 audit[3014]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 23 05:39:26.090000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecb212990 a2=0 a3=7ffecb21297c items=0 ppid=2961 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.090000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Jan 23 05:39:26.092000 audit[3015]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 23 05:39:26.092000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2f028630 a2=0 a3=7ffc2f02861c items=0 ppid=2961 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.092000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Jan 23 05:39:26.093000 audit[3016]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.093000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe01993180 a2=0 a3=7ffe0199316c items=0 ppid=2961 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.093000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Jan 23 05:39:26.097000 audit[3018]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3018 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.097000 audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf0bfb4f0 a2=0 a3=7ffdf0bfb4dc items=0 ppid=2961 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.097000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Jan 23 05:39:26.194000 audit[3019]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3019 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.194000 audit[3019]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcd95c1cf0 a2=0 a3=7ffcd95c1cdc items=0 ppid=2961 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Jan 23 05:39:26.199000 audit[3021]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.199000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe51c34a00 a2=0 a3=7ffe51c349ec items=0 ppid=2961 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.199000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365
Jan 23 05:39:26.206000 audit[3024]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.206000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffa186c740 a2=0 a3=7fffa186c72c items=0 ppid=2961 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.206000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669
Jan 23 05:39:26.209000 audit[3025]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.209000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4d851fe0 a2=0 a3=7ffc4d851fcc items=0 ppid=2961 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.209000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Jan 23 05:39:26.213000 audit[3027]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.213000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd14388f30 a2=0 a3=7ffd14388f1c items=0 ppid=2961 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.213000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Jan 23 05:39:26.216000 audit[3028]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3028 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.216000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7b9a9a80 a2=0 a3=7ffc7b9a9a6c items=0 ppid=2961 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.216000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Jan 23 05:39:26.221000 audit[3030]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.221000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffffc973300 a2=0 a3=7ffffc9732ec items=0 ppid=2961 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.221000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Jan 23 05:39:26.229000 audit[3033]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3033 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.229000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd13ddb9d0 a2=0 a3=7ffd13ddb9bc items=0 ppid=2961 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.229000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53
Jan 23 05:39:26.232000 audit[3034]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.232000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde8214310 a2=0 a3=7ffde82142fc items=0 ppid=2961 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.232000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Jan 23 05:39:26.238000 audit[3036]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.238000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd08d97ec0 a2=0 a3=7ffd08d97eac items=0 ppid=2961 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Jan 23 05:39:26.240000 audit[3037]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3037 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.240000 audit[3037]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7a85fb10 a2=0 a3=7ffd7a85fafc items=0 ppid=2961 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.240000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Jan 23 05:39:26.244000 audit[3039]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.244000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd467bdbc0 a2=0 a3=7ffd467bdbac items=0 ppid=2961 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.244000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Jan 23 05:39:26.251000 audit[3042]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.251000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdeee06bc0 a2=0 a3=7ffdeee06bac items=0 ppid=2961 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.251000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Jan 23 05:39:26.258000 audit[3045]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.258000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe21182240 a2=0 a3=7ffe2118222c items=0 ppid=2961 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.258000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Jan 23 05:39:26.261000 audit[3046]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3046 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.261000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd15987280 a2=0 a3=7ffd1598726c items=0 ppid=2961 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Jan 23 05:39:26.267000 audit[3048]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.267000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd0b065140 a2=0 a3=7ffd0b06512c items=0 ppid=2961 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Jan 23 05:39:26.275000 audit[3051]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.275000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe2ac288b0 a2=0 a3=7ffe2ac2889c items=0 ppid=2961 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:39:26.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Jan 23 05:39:26.277000 audit[3052]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 23 05:39:26.277000 audit[3052]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7bd7b870 a2=0 a3=7ffc7bd7b85c items=0 ppid=2961 pid=3052
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 05:39:26.283000 audit[3054]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 05:39:26.283000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe24c4ab30 a2=0 a3=7ffe24c4ab1c items=0 ppid=2961 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.283000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 05:39:26.316000 audit[3060]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:26.316000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff69f38180 a2=0 a3=7fff69f3816c items=0 ppid=2961 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.316000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:26.329000 audit[3060]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 23 05:39:26.329000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff69f38180 a2=0 a3=7fff69f3816c items=0 ppid=2961 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.329000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:26.331000 audit[3065]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3065 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.331000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe4161f2e0 a2=0 a3=7ffe4161f2cc items=0 ppid=2961 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.331000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 05:39:26.337000 audit[3067]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.337000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffefd4a0750 a2=0 a3=7ffefd4a073c items=0 ppid=2961 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.337000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 23 05:39:26.345000 audit[3070]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.345000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd7c91b3b0 a2=0 a3=7ffd7c91b39c items=0 ppid=2961 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 23 05:39:26.347000 audit[3071]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.347000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc5c696a0 a2=0 a3=7fffc5c6968c items=0 ppid=2961 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.347000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 05:39:26.352000 audit[3073]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.352000 audit[3073]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffe350dc990 a2=0 a3=7ffe350dc97c items=0 ppid=2961 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.352000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 05:39:26.354000 audit[3074]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.354000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff14a1b560 a2=0 a3=7fff14a1b54c items=0 ppid=2961 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.354000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 05:39:26.360000 audit[3076]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.360000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe9fd81830 a2=0 a3=7ffe9fd8181c items=0 ppid=2961 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.360000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 23 05:39:26.367000 audit[3079]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.367000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffce4500690 a2=0 a3=7ffce450067c items=0 ppid=2961 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.367000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 23 05:39:26.369000 audit[3080]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.369000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2cabd900 a2=0 a3=7fff2cabd8ec items=0 ppid=2961 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.369000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 05:39:26.373000 audit[3082]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.373000 audit[3082]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffc62bad100 a2=0 a3=7ffc62bad0ec items=0 ppid=2961 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.373000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 05:39:26.375000 audit[3083]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.375000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfa13d3a0 a2=0 a3=7ffdfa13d38c items=0 ppid=2961 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.375000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 05:39:26.380000 audit[3085]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.380000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd0c8cd20 a2=0 a3=7fffd0c8cd0c items=0 ppid=2961 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.380000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 05:39:26.386000 audit[3088]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.386000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe1a4ec590 a2=0 a3=7ffe1a4ec57c items=0 ppid=2961 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.386000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 23 05:39:26.393000 audit[3091]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.393000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdab806b20 a2=0 a3=7ffdab806b0c items=0 ppid=2961 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.393000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 23 05:39:26.396000 audit[3092]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.396000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb10f3b70 a2=0 a3=7ffeb10f3b5c items=0 ppid=2961 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.396000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 05:39:26.400000 audit[3094]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.400000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdb1596ec0 a2=0 a3=7ffdb1596eac items=0 ppid=2961 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.400000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 05:39:26.407000 audit[3097]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.407000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdde8df950 a2=0 a3=7ffdde8df93c items=0 ppid=2961 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.407000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 05:39:26.408000 audit[3098]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.408000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4f3caca0 a2=0 a3=7ffd4f3cac8c items=0 ppid=2961 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.408000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 05:39:26.413000 audit[3100]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.413000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd2db1bb80 a2=0 a3=7ffd2db1bb6c items=0 ppid=2961 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.413000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 05:39:26.415000 audit[3101]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.415000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6d1f0110 a2=0 
a3=7ffd6d1f00fc items=0 ppid=2961 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.415000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 05:39:26.419000 audit[3103]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.419000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe3b6a7390 a2=0 a3=7ffe3b6a737c items=0 ppid=2961 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 05:39:26.425000 audit[3106]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 05:39:26.425000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd6177e9d0 a2=0 a3=7ffd6177e9bc items=0 ppid=2961 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.425000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 05:39:26.431000 audit[3108]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 05:39:26.431000 audit[3108]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe7e013e30 a2=0 a3=7ffe7e013e1c items=0 ppid=2961 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.431000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:26.432000 audit[3108]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 05:39:26.432000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe7e013e30 a2=0 a3=7ffe7e013e1c items=0 ppid=2961 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:26.432000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:26.618342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1218550046.mount: Deactivated successfully. 
Jan 23 05:39:27.397129 containerd[1597]: time="2026-01-23T05:39:27.396970288Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:27.398413 containerd[1597]: time="2026-01-23T05:39:27.398343112Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 23 05:39:27.399718 containerd[1597]: time="2026-01-23T05:39:27.399600867Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:27.402044 containerd[1597]: time="2026-01-23T05:39:27.401962123Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:27.402862 containerd[1597]: time="2026-01-23T05:39:27.402790806Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.640293386s" Jan 23 05:39:27.402862 containerd[1597]: time="2026-01-23T05:39:27.402849656Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 05:39:27.405771 containerd[1597]: time="2026-01-23T05:39:27.405738913Z" level=info msg="CreateContainer within sandbox \"f1e9e2c231905559a78f6ecb0e2a2980c7fa7d532b30c89fa93f50f17c954571\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 05:39:27.416142 containerd[1597]: time="2026-01-23T05:39:27.416011254Z" level=info msg="Container 
7a5e95b2d8a623c9b5b40b5b892d8a20be7a5d910e47b3e60259f26ebda15938: CDI devices from CRI Config.CDIDevices: []" Jan 23 05:39:27.425645 containerd[1597]: time="2026-01-23T05:39:27.425512944Z" level=info msg="CreateContainer within sandbox \"f1e9e2c231905559a78f6ecb0e2a2980c7fa7d532b30c89fa93f50f17c954571\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7a5e95b2d8a623c9b5b40b5b892d8a20be7a5d910e47b3e60259f26ebda15938\"" Jan 23 05:39:27.426387 containerd[1597]: time="2026-01-23T05:39:27.426310733Z" level=info msg="StartContainer for \"7a5e95b2d8a623c9b5b40b5b892d8a20be7a5d910e47b3e60259f26ebda15938\"" Jan 23 05:39:27.427935 containerd[1597]: time="2026-01-23T05:39:27.427684086Z" level=info msg="connecting to shim 7a5e95b2d8a623c9b5b40b5b892d8a20be7a5d910e47b3e60259f26ebda15938" address="unix:///run/containerd/s/59c97fb8d1d658a42901078c8077b6c11d74a0562fe923a99f592244189efd24" protocol=ttrpc version=3 Jan 23 05:39:27.464309 systemd[1]: Started cri-containerd-7a5e95b2d8a623c9b5b40b5b892d8a20be7a5d910e47b3e60259f26ebda15938.scope - libcontainer container 7a5e95b2d8a623c9b5b40b5b892d8a20be7a5d910e47b3e60259f26ebda15938. 
Jan 23 05:39:27.481000 audit: BPF prog-id=146 op=LOAD Jan 23 05:39:27.481000 audit: BPF prog-id=147 op=LOAD Jan 23 05:39:27.481000 audit[3117]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2909 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:27.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761356539356232643861363233633962356234306235623839326438 Jan 23 05:39:27.482000 audit: BPF prog-id=147 op=UNLOAD Jan 23 05:39:27.482000 audit[3117]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2909 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:27.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761356539356232643861363233633962356234306235623839326438 Jan 23 05:39:27.482000 audit: BPF prog-id=148 op=LOAD Jan 23 05:39:27.482000 audit[3117]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2909 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:27.482000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761356539356232643861363233633962356234306235623839326438 Jan 23 05:39:27.482000 audit: BPF prog-id=149 op=LOAD Jan 23 05:39:27.482000 audit[3117]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2909 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:27.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761356539356232643861363233633962356234306235623839326438 Jan 23 05:39:27.482000 audit: BPF prog-id=149 op=UNLOAD Jan 23 05:39:27.482000 audit[3117]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2909 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:27.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761356539356232643861363233633962356234306235623839326438 Jan 23 05:39:27.482000 audit: BPF prog-id=148 op=UNLOAD Jan 23 05:39:27.482000 audit[3117]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2909 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
05:39:27.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761356539356232643861363233633962356234306235623839326438 Jan 23 05:39:27.482000 audit: BPF prog-id=150 op=LOAD Jan 23 05:39:27.482000 audit[3117]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2909 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:27.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761356539356232643861363233633962356234306235623839326438 Jan 23 05:39:27.507290 containerd[1597]: time="2026-01-23T05:39:27.507225512Z" level=info msg="StartContainer for \"7a5e95b2d8a623c9b5b40b5b892d8a20be7a5d910e47b3e60259f26ebda15938\" returns successfully" Jan 23 05:39:28.037265 kubelet[2784]: I0123 05:39:28.037098 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kr7dn" podStartSLOduration=3.037036006 podStartE2EDuration="3.037036006s" podCreationTimestamp="2026-01-23 05:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 05:39:26.034335189 +0000 UTC m=+6.173928420" watchObservedRunningTime="2026-01-23 05:39:28.037036006 +0000 UTC m=+8.176629217" Jan 23 05:39:29.714529 kubelet[2784]: E0123 05:39:29.714496 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:29.732676 
kubelet[2784]: I0123 05:39:29.732501 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-6wq2b" podStartSLOduration=3.089990909 podStartE2EDuration="4.732481211s" podCreationTimestamp="2026-01-23 05:39:25 +0000 UTC" firstStartedPulling="2026-01-23 05:39:25.761388261 +0000 UTC m=+5.900981472" lastFinishedPulling="2026-01-23 05:39:27.403878562 +0000 UTC m=+7.543471774" observedRunningTime="2026-01-23 05:39:28.037248543 +0000 UTC m=+8.176841753" watchObservedRunningTime="2026-01-23 05:39:29.732481211 +0000 UTC m=+9.872074432" Jan 23 05:39:30.035274 kubelet[2784]: E0123 05:39:30.033513 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:31.096257 kubelet[2784]: E0123 05:39:31.096192 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:33.233014 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 23 05:39:33.233488 kernel: audit: type=1106 audit(1769146773.221:512): pid=1835 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 05:39:33.221000 audit[1835]: USER_END pid=1835 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 05:39:33.222863 sudo[1835]: pam_unix(sudo:session): session closed for user root Jan 23 05:39:33.222000 audit[1835]: CRED_DISP pid=1835 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? 
terminal=? res=success' Jan 23 05:39:33.242923 kernel: audit: type=1104 audit(1769146773.222:513): pid=1835 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 05:39:33.246094 sshd[1834]: Connection closed by 10.0.0.1 port 44264 Jan 23 05:39:33.245424 sshd-session[1830]: pam_unix(sshd:session): session closed for user core Jan 23 05:39:33.251000 audit[1830]: USER_END pid=1830 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:39:33.263115 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:44264.service: Deactivated successfully. Jan 23 05:39:33.268406 kernel: audit: type=1106 audit(1769146773.251:514): pid=1830 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:39:33.252000 audit[1830]: CRED_DISP pid=1830 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:39:33.286517 kernel: audit: type=1104 audit(1769146773.252:515): pid=1830 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:39:33.286583 kernel: audit: type=1131 audit(1769146773.262:516): pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.10:22-10.0.0.1:44264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:39:33.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.10:22-10.0.0.1:44264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:39:33.278990 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 05:39:33.281812 systemd[1]: session-10.scope: Consumed 5.334s CPU time, 207.1M memory peak. Jan 23 05:39:33.292327 systemd-logind[1569]: Session 10 logged out. Waiting for processes to exit. Jan 23 05:39:33.300164 systemd-logind[1569]: Removed session 10. Jan 23 05:39:34.411000 audit[3210]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:34.421226 kernel: audit: type=1325 audit(1769146774.411:517): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:34.411000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd44fba050 a2=0 a3=7ffd44fba03c items=0 ppid=2961 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:34.448923 kernel: audit: type=1300 audit(1769146774.411:517): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd44fba050 a2=0 a3=7ffd44fba03c items=0 ppid=2961 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:34.449023 kernel: audit: type=1327 
audit(1769146774.411:517): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:34.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:34.456667 kernel: audit: type=1325 audit(1769146774.422:518): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:34.422000 audit[3210]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:34.498994 kernel: audit: type=1300 audit(1769146774.422:518): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd44fba050 a2=0 a3=0 items=0 ppid=2961 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:34.422000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd44fba050 a2=0 a3=0 items=0 ppid=2961 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:34.422000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:34.584000 audit[3212]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:34.584000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd3834f060 a2=0 a3=7ffd3834f04c items=0 ppid=2961 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:34.584000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:34.590000 audit[3212]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:34.590000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd3834f060 a2=0 a3=0 items=0 ppid=2961 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:34.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:36.639000 audit[3214]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:36.639000 audit[3214]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe89333f20 a2=0 a3=7ffe89333f0c items=0 ppid=2961 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:36.639000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:36.645000 audit[3214]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:36.645000 audit[3214]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe89333f20 a2=0 a3=0 items=0 ppid=2961 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:36.645000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:36.664000 audit[3216]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:36.664000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff9d52d8c0 a2=0 a3=7fff9d52d8ac items=0 ppid=2961 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:36.664000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:36.675000 audit[3216]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:36.675000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff9d52d8c0 a2=0 a3=0 items=0 ppid=2961 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:36.675000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:37.692000 audit[3218]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:37.692000 audit[3218]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffebbb486f0 a2=0 a3=7ffebbb486dc items=0 ppid=2961 
pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:37.692000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:37.697000 audit[3218]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:37.697000 audit[3218]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffebbb486f0 a2=0 a3=0 items=0 ppid=2961 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:37.697000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:38.273834 systemd[1]: Created slice kubepods-besteffort-pod70696a92_1dee_4a7a_a3d8_f2d2d350f1e8.slice - libcontainer container kubepods-besteffort-pod70696a92_1dee_4a7a_a3d8_f2d2d350f1e8.slice. 
Jan 23 05:39:38.365613 kubelet[2784]: I0123 05:39:38.365515 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/70696a92-1dee-4a7a-a3d8-f2d2d350f1e8-typha-certs\") pod \"calico-typha-6fcf599d59-js5jr\" (UID: \"70696a92-1dee-4a7a-a3d8-f2d2d350f1e8\") " pod="calico-system/calico-typha-6fcf599d59-js5jr" Jan 23 05:39:38.365613 kubelet[2784]: I0123 05:39:38.365576 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wbk2\" (UniqueName: \"kubernetes.io/projected/70696a92-1dee-4a7a-a3d8-f2d2d350f1e8-kube-api-access-7wbk2\") pod \"calico-typha-6fcf599d59-js5jr\" (UID: \"70696a92-1dee-4a7a-a3d8-f2d2d350f1e8\") " pod="calico-system/calico-typha-6fcf599d59-js5jr" Jan 23 05:39:38.365613 kubelet[2784]: I0123 05:39:38.365597 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70696a92-1dee-4a7a-a3d8-f2d2d350f1e8-tigera-ca-bundle\") pod \"calico-typha-6fcf599d59-js5jr\" (UID: \"70696a92-1dee-4a7a-a3d8-f2d2d350f1e8\") " pod="calico-system/calico-typha-6fcf599d59-js5jr" Jan 23 05:39:38.453689 systemd[1]: Created slice kubepods-besteffort-podffaa983a_69de_48a2_bd5a_61360a164dd9.slice - libcontainer container kubepods-besteffort-podffaa983a_69de_48a2_bd5a_61360a164dd9.slice. 
Jan 23 05:39:38.466884 kubelet[2784]: I0123 05:39:38.466748 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ffaa983a-69de-48a2-bd5a-61360a164dd9-cni-log-dir\") pod \"calico-node-nc4tf\" (UID: \"ffaa983a-69de-48a2-bd5a-61360a164dd9\") " pod="calico-system/calico-node-nc4tf" Jan 23 05:39:38.466884 kubelet[2784]: I0123 05:39:38.466796 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffaa983a-69de-48a2-bd5a-61360a164dd9-lib-modules\") pod \"calico-node-nc4tf\" (UID: \"ffaa983a-69de-48a2-bd5a-61360a164dd9\") " pod="calico-system/calico-node-nc4tf" Jan 23 05:39:38.466884 kubelet[2784]: I0123 05:39:38.466813 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ffaa983a-69de-48a2-bd5a-61360a164dd9-cni-net-dir\") pod \"calico-node-nc4tf\" (UID: \"ffaa983a-69de-48a2-bd5a-61360a164dd9\") " pod="calico-system/calico-node-nc4tf" Jan 23 05:39:38.466884 kubelet[2784]: I0123 05:39:38.466826 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ffaa983a-69de-48a2-bd5a-61360a164dd9-flexvol-driver-host\") pod \"calico-node-nc4tf\" (UID: \"ffaa983a-69de-48a2-bd5a-61360a164dd9\") " pod="calico-system/calico-node-nc4tf" Jan 23 05:39:38.466884 kubelet[2784]: I0123 05:39:38.466842 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ffaa983a-69de-48a2-bd5a-61360a164dd9-node-certs\") pod \"calico-node-nc4tf\" (UID: \"ffaa983a-69de-48a2-bd5a-61360a164dd9\") " pod="calico-system/calico-node-nc4tf" Jan 23 05:39:38.467324 kubelet[2784]: I0123 05:39:38.466854 2784 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffaa983a-69de-48a2-bd5a-61360a164dd9-tigera-ca-bundle\") pod \"calico-node-nc4tf\" (UID: \"ffaa983a-69de-48a2-bd5a-61360a164dd9\") " pod="calico-system/calico-node-nc4tf" Jan 23 05:39:38.467324 kubelet[2784]: I0123 05:39:38.466867 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ffaa983a-69de-48a2-bd5a-61360a164dd9-var-run-calico\") pod \"calico-node-nc4tf\" (UID: \"ffaa983a-69de-48a2-bd5a-61360a164dd9\") " pod="calico-system/calico-node-nc4tf" Jan 23 05:39:38.467324 kubelet[2784]: I0123 05:39:38.466894 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ffaa983a-69de-48a2-bd5a-61360a164dd9-cni-bin-dir\") pod \"calico-node-nc4tf\" (UID: \"ffaa983a-69de-48a2-bd5a-61360a164dd9\") " pod="calico-system/calico-node-nc4tf" Jan 23 05:39:38.467324 kubelet[2784]: I0123 05:39:38.466906 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffaa983a-69de-48a2-bd5a-61360a164dd9-xtables-lock\") pod \"calico-node-nc4tf\" (UID: \"ffaa983a-69de-48a2-bd5a-61360a164dd9\") " pod="calico-system/calico-node-nc4tf" Jan 23 05:39:38.467324 kubelet[2784]: I0123 05:39:38.466918 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbw8f\" (UniqueName: \"kubernetes.io/projected/ffaa983a-69de-48a2-bd5a-61360a164dd9-kube-api-access-mbw8f\") pod \"calico-node-nc4tf\" (UID: \"ffaa983a-69de-48a2-bd5a-61360a164dd9\") " pod="calico-system/calico-node-nc4tf" Jan 23 05:39:38.467518 kubelet[2784]: I0123 05:39:38.466940 2784 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ffaa983a-69de-48a2-bd5a-61360a164dd9-var-lib-calico\") pod \"calico-node-nc4tf\" (UID: \"ffaa983a-69de-48a2-bd5a-61360a164dd9\") " pod="calico-system/calico-node-nc4tf" Jan 23 05:39:38.467518 kubelet[2784]: I0123 05:39:38.466974 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ffaa983a-69de-48a2-bd5a-61360a164dd9-policysync\") pod \"calico-node-nc4tf\" (UID: \"ffaa983a-69de-48a2-bd5a-61360a164dd9\") " pod="calico-system/calico-node-nc4tf" Jan 23 05:39:38.576217 kubelet[2784]: E0123 05:39:38.575611 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.576217 kubelet[2784]: W0123 05:39:38.575683 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.576217 kubelet[2784]: E0123 05:39:38.575783 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.578275 kubelet[2784]: E0123 05:39:38.578164 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:38.581279 containerd[1597]: time="2026-01-23T05:39:38.581168994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fcf599d59-js5jr,Uid:70696a92-1dee-4a7a-a3d8-f2d2d350f1e8,Namespace:calico-system,Attempt:0,}" Jan 23 05:39:38.588150 kubelet[2784]: E0123 05:39:38.587916 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.588150 kubelet[2784]: W0123 05:39:38.588100 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.588384 kubelet[2784]: E0123 05:39:38.588361 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.614481 containerd[1597]: time="2026-01-23T05:39:38.614270333Z" level=info msg="connecting to shim efe1c50d665748ae1f0bc341804663db60fe7b5664ae9db056f0e08a3797713e" address="unix:///run/containerd/s/34bbe453dafe62eb70b20b772f7d9849116d3c8a94a46bc78b993744fcf88688" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:39:38.647941 kubelet[2784]: E0123 05:39:38.647806 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rpddq" podUID="5e85a0d3-5d32-4d8a-b91c-0a641948fd22" Jan 23 05:39:38.660347 systemd[1]: Started cri-containerd-efe1c50d665748ae1f0bc341804663db60fe7b5664ae9db056f0e08a3797713e.scope - libcontainer container efe1c50d665748ae1f0bc341804663db60fe7b5664ae9db056f0e08a3797713e. Jan 23 05:39:38.663329 kubelet[2784]: E0123 05:39:38.663256 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.663329 kubelet[2784]: W0123 05:39:38.663290 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.663329 kubelet[2784]: E0123 05:39:38.663311 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.665499 kubelet[2784]: E0123 05:39:38.665453 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.665499 kubelet[2784]: W0123 05:39:38.665488 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.665816 kubelet[2784]: E0123 05:39:38.665508 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.666681 kubelet[2784]: E0123 05:39:38.666165 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.666681 kubelet[2784]: W0123 05:39:38.666355 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.666681 kubelet[2784]: E0123 05:39:38.666374 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.667771 kubelet[2784]: E0123 05:39:38.667463 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.668236 kubelet[2784]: W0123 05:39:38.668132 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.668541 kubelet[2784]: E0123 05:39:38.668415 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.669757 kubelet[2784]: E0123 05:39:38.669233 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.669757 kubelet[2784]: W0123 05:39:38.669246 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.669757 kubelet[2784]: E0123 05:39:38.669259 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.669899 kubelet[2784]: E0123 05:39:38.669878 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.669899 kubelet[2784]: W0123 05:39:38.669893 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.669949 kubelet[2784]: E0123 05:39:38.669903 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.671503 kubelet[2784]: E0123 05:39:38.671129 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.671881 kubelet[2784]: W0123 05:39:38.671865 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.671961 kubelet[2784]: E0123 05:39:38.671946 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.673218 kubelet[2784]: E0123 05:39:38.672907 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.673218 kubelet[2784]: W0123 05:39:38.672922 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.673218 kubelet[2784]: E0123 05:39:38.672932 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.673596 kubelet[2784]: E0123 05:39:38.673578 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.674456 kubelet[2784]: W0123 05:39:38.673914 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.675082 kubelet[2784]: E0123 05:39:38.674914 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.676489 kubelet[2784]: E0123 05:39:38.675522 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.676489 kubelet[2784]: W0123 05:39:38.675538 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.676489 kubelet[2784]: E0123 05:39:38.675618 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.676489 kubelet[2784]: E0123 05:39:38.676441 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.676630 kubelet[2784]: W0123 05:39:38.676523 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.676630 kubelet[2784]: E0123 05:39:38.676538 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.677395 kubelet[2784]: E0123 05:39:38.677302 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.677395 kubelet[2784]: W0123 05:39:38.677389 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.677539 kubelet[2784]: E0123 05:39:38.677407 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.678359 kubelet[2784]: E0123 05:39:38.678281 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.678410 kubelet[2784]: W0123 05:39:38.678383 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.678410 kubelet[2784]: E0123 05:39:38.678399 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.679111 kubelet[2784]: E0123 05:39:38.678872 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.679111 kubelet[2784]: W0123 05:39:38.678889 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.679111 kubelet[2784]: E0123 05:39:38.678902 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.679437 kubelet[2784]: E0123 05:39:38.679388 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.679535 kubelet[2784]: W0123 05:39:38.679497 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.679567 kubelet[2784]: E0123 05:39:38.679532 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.680342 kubelet[2784]: E0123 05:39:38.680304 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.680443 kubelet[2784]: W0123 05:39:38.680406 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.681107 kubelet[2784]: E0123 05:39:38.680439 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.681107 kubelet[2784]: E0123 05:39:38.680844 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.681107 kubelet[2784]: W0123 05:39:38.680865 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.681107 kubelet[2784]: E0123 05:39:38.680876 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.681231 kubelet[2784]: E0123 05:39:38.681114 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.681231 kubelet[2784]: W0123 05:39:38.681123 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.681231 kubelet[2784]: E0123 05:39:38.681131 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.681357 kubelet[2784]: E0123 05:39:38.681321 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.681357 kubelet[2784]: W0123 05:39:38.681342 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.681357 kubelet[2784]: E0123 05:39:38.681350 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.681629 kubelet[2784]: E0123 05:39:38.681504 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.681629 kubelet[2784]: W0123 05:39:38.681513 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.681629 kubelet[2784]: E0123 05:39:38.681520 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.682947 kubelet[2784]: E0123 05:39:38.682916 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.682947 kubelet[2784]: W0123 05:39:38.682935 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.682947 kubelet[2784]: E0123 05:39:38.682945 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.683168 kubelet[2784]: I0123 05:39:38.682976 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5e85a0d3-5d32-4d8a-b91c-0a641948fd22-socket-dir\") pod \"csi-node-driver-rpddq\" (UID: \"5e85a0d3-5d32-4d8a-b91c-0a641948fd22\") " pod="calico-system/csi-node-driver-rpddq" Jan 23 05:39:38.683697 kubelet[2784]: E0123 05:39:38.683606 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.683697 kubelet[2784]: W0123 05:39:38.683619 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.684136 kubelet[2784]: E0123 05:39:38.684111 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.684168 kubelet[2784]: I0123 05:39:38.684142 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e85a0d3-5d32-4d8a-b91c-0a641948fd22-kubelet-dir\") pod \"csi-node-driver-rpddq\" (UID: \"5e85a0d3-5d32-4d8a-b91c-0a641948fd22\") " pod="calico-system/csi-node-driver-rpddq" Jan 23 05:39:38.684474 kubelet[2784]: E0123 05:39:38.684363 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.684474 kubelet[2784]: W0123 05:39:38.684377 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.684534 kubelet[2784]: E0123 05:39:38.684486 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.684604 kubelet[2784]: E0123 05:39:38.684593 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.684690 kubelet[2784]: W0123 05:39:38.684676 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.684799 kubelet[2784]: I0123 05:39:38.684519 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5e85a0d3-5d32-4d8a-b91c-0a641948fd22-varrun\") pod \"csi-node-driver-rpddq\" (UID: \"5e85a0d3-5d32-4d8a-b91c-0a641948fd22\") " pod="calico-system/csi-node-driver-rpddq" Jan 23 05:39:38.684933 kubelet[2784]: E0123 05:39:38.684866 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.689825 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 23 05:39:38.689864 kernel: audit: type=1334 audit(1769146778.684:527): prog-id=151 op=LOAD Jan 23 05:39:38.684000 audit: BPF prog-id=151 op=LOAD Jan 23 05:39:38.689969 kubelet[2784]: E0123 05:39:38.688088 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.689969 kubelet[2784]: W0123 05:39:38.688098 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.689969 kubelet[2784]: E0123 05:39:38.688124 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.689969 kubelet[2784]: E0123 05:39:38.688733 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.689969 kubelet[2784]: W0123 05:39:38.688741 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.689969 kubelet[2784]: E0123 05:39:38.688765 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.689969 kubelet[2784]: E0123 05:39:38.689093 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.689969 kubelet[2784]: W0123 05:39:38.689101 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.689969 kubelet[2784]: E0123 05:39:38.689109 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.689969 kubelet[2784]: E0123 05:39:38.689446 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.690248 kubelet[2784]: W0123 05:39:38.689457 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.690248 kubelet[2784]: E0123 05:39:38.689465 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.690248 kubelet[2784]: E0123 05:39:38.689882 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.690248 kubelet[2784]: W0123 05:39:38.689894 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.690248 kubelet[2784]: E0123 05:39:38.689908 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.685000 audit: BPF prog-id=152 op=LOAD Jan 23 05:39:38.691159 kubelet[2784]: E0123 05:39:38.690746 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.691159 kubelet[2784]: W0123 05:39:38.690758 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.691159 kubelet[2784]: E0123 05:39:38.690770 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.691159 kubelet[2784]: I0123 05:39:38.690858 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5e85a0d3-5d32-4d8a-b91c-0a641948fd22-registration-dir\") pod \"csi-node-driver-rpddq\" (UID: \"5e85a0d3-5d32-4d8a-b91c-0a641948fd22\") " pod="calico-system/csi-node-driver-rpddq" Jan 23 05:39:38.691514 kubelet[2784]: E0123 05:39:38.691471 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.691514 kubelet[2784]: W0123 05:39:38.691500 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.691756 kubelet[2784]: E0123 05:39:38.691728 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.691974 kubelet[2784]: I0123 05:39:38.691955 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gpdm\" (UniqueName: \"kubernetes.io/projected/5e85a0d3-5d32-4d8a-b91c-0a641948fd22-kube-api-access-8gpdm\") pod \"csi-node-driver-rpddq\" (UID: \"5e85a0d3-5d32-4d8a-b91c-0a641948fd22\") " pod="calico-system/csi-node-driver-rpddq" Jan 23 05:39:38.692169 kubelet[2784]: E0123 05:39:38.692146 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.692195 kubelet[2784]: W0123 05:39:38.692169 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.692226 kubelet[2784]: E0123 05:39:38.692194 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.685000 audit[3245]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3234 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.692634 kubelet[2784]: E0123 05:39:38.692609 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.692692 kubelet[2784]: W0123 05:39:38.692633 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.692744 kubelet[2784]: E0123 05:39:38.692719 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.693131 kubelet[2784]: E0123 05:39:38.693037 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.693182 kubelet[2784]: W0123 05:39:38.693157 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.693182 kubelet[2784]: E0123 05:39:38.693173 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.693594 kubelet[2784]: E0123 05:39:38.693544 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.693594 kubelet[2784]: W0123 05:39:38.693574 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.693594 kubelet[2784]: E0123 05:39:38.693585 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.701237 kernel: audit: type=1334 audit(1769146778.685:528): prog-id=152 op=LOAD Jan 23 05:39:38.701286 kernel: audit: type=1300 audit(1769146778.685:528): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3234 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.701363 kernel: audit: type=1327 audit(1769146778.685:528): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653163353064363635373438616531663062633334313830343636 Jan 23 05:39:38.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653163353064363635373438616531663062633334313830343636 Jan 23 05:39:38.712927 kernel: audit: type=1334 audit(1769146778.685:529): prog-id=152 op=UNLOAD Jan 23 05:39:38.685000 audit: BPF prog-id=152 op=UNLOAD Jan 23 05:39:38.685000 
audit[3245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3234 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.721168 kernel: audit: type=1300 audit(1769146778.685:529): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3234 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.722329 kernel: audit: type=1327 audit(1769146778.685:529): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653163353064363635373438616531663062633334313830343636 Jan 23 05:39:38.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653163353064363635373438616531663062633334313830343636 Jan 23 05:39:38.685000 audit: BPF prog-id=153 op=LOAD Jan 23 05:39:38.735862 kernel: audit: type=1334 audit(1769146778.685:530): prog-id=153 op=LOAD Jan 23 05:39:38.735912 kernel: audit: type=1300 audit(1769146778.685:530): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3234 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.685000 audit[3245]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3234 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653163353064363635373438616531663062633334313830343636 Jan 23 05:39:38.758032 kernel: audit: type=1327 audit(1769146778.685:530): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653163353064363635373438616531663062633334313830343636 Jan 23 05:39:38.685000 audit: BPF prog-id=154 op=LOAD Jan 23 05:39:38.685000 audit[3245]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3234 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653163353064363635373438616531663062633334313830343636 Jan 23 05:39:38.685000 audit: BPF prog-id=154 op=UNLOAD Jan 23 05:39:38.685000 audit[3245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3234 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.685000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653163353064363635373438616531663062633334313830343636 Jan 23 05:39:38.685000 audit: BPF prog-id=153 op=UNLOAD Jan 23 05:39:38.685000 audit[3245]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3234 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653163353064363635373438616531663062633334313830343636 Jan 23 05:39:38.685000 audit: BPF prog-id=155 op=LOAD Jan 23 05:39:38.685000 audit[3245]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3234 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566653163353064363635373438616531663062633334313830343636 Jan 23 05:39:38.717000 audit[3311]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:38.717000 audit[3311]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcbb3f41f0 a2=0 a3=7ffcbb3f41dc items=0 ppid=2961 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.717000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:38.734000 audit[3311]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:38.734000 audit[3311]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcbb3f41f0 a2=0 a3=0 items=0 ppid=2961 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.734000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:38.761585 kubelet[2784]: E0123 05:39:38.760152 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:38.761725 containerd[1597]: time="2026-01-23T05:39:38.761514743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nc4tf,Uid:ffaa983a-69de-48a2-bd5a-61360a164dd9,Namespace:calico-system,Attempt:0,}" Jan 23 05:39:38.779048 containerd[1597]: time="2026-01-23T05:39:38.778961168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fcf599d59-js5jr,Uid:70696a92-1dee-4a7a-a3d8-f2d2d350f1e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"efe1c50d665748ae1f0bc341804663db60fe7b5664ae9db056f0e08a3797713e\"" Jan 23 05:39:38.786915 kubelet[2784]: E0123 05:39:38.786846 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:38.790493 containerd[1597]: time="2026-01-23T05:39:38.789558606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 05:39:38.795383 kubelet[2784]: E0123 05:39:38.795355 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.795619 kubelet[2784]: W0123 05:39:38.795597 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.795863 kubelet[2784]: E0123 05:39:38.795839 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.798288 kubelet[2784]: E0123 05:39:38.796634 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.798288 kubelet[2784]: W0123 05:39:38.798205 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.798616 kubelet[2784]: E0123 05:39:38.798445 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.799248 kubelet[2784]: E0123 05:39:38.799227 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.799907 kubelet[2784]: W0123 05:39:38.799798 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.801342 kubelet[2784]: E0123 05:39:38.800543 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.802634 kubelet[2784]: E0123 05:39:38.802565 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.802870 kubelet[2784]: W0123 05:39:38.802584 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.803246 kubelet[2784]: E0123 05:39:38.802817 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.803831 kubelet[2784]: E0123 05:39:38.803721 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.803900 kubelet[2784]: W0123 05:39:38.803813 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.806381 kubelet[2784]: E0123 05:39:38.806295 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.806381 kubelet[2784]: W0123 05:39:38.806332 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.806381 kubelet[2784]: E0123 05:39:38.806358 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.806749 kubelet[2784]: E0123 05:39:38.806705 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.807402 kubelet[2784]: E0123 05:39:38.807323 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.807402 kubelet[2784]: W0123 05:39:38.807353 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.807567 kubelet[2784]: E0123 05:39:38.807505 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.807957 kubelet[2784]: E0123 05:39:38.807914 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.807957 kubelet[2784]: W0123 05:39:38.807950 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.808109 kubelet[2784]: E0123 05:39:38.808017 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.808368 kubelet[2784]: E0123 05:39:38.808311 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.808368 kubelet[2784]: W0123 05:39:38.808346 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.808703 kubelet[2784]: E0123 05:39:38.808399 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.809793 kubelet[2784]: E0123 05:39:38.809576 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.809793 kubelet[2784]: W0123 05:39:38.809614 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.809793 kubelet[2784]: E0123 05:39:38.809783 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.810953 kubelet[2784]: E0123 05:39:38.809953 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.810953 kubelet[2784]: W0123 05:39:38.809966 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.810953 kubelet[2784]: E0123 05:39:38.810042 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.810953 kubelet[2784]: E0123 05:39:38.810429 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.810953 kubelet[2784]: W0123 05:39:38.810442 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.810953 kubelet[2784]: E0123 05:39:38.810578 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.811246 kubelet[2784]: E0123 05:39:38.810991 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.811246 kubelet[2784]: W0123 05:39:38.811003 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.811246 kubelet[2784]: E0123 05:39:38.811128 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.811526 kubelet[2784]: E0123 05:39:38.811463 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.811526 kubelet[2784]: W0123 05:39:38.811478 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.811616 kubelet[2784]: E0123 05:39:38.811600 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.812424 kubelet[2784]: E0123 05:39:38.812372 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.812424 kubelet[2784]: W0123 05:39:38.812402 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.812708 kubelet[2784]: E0123 05:39:38.812613 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.813309 kubelet[2784]: E0123 05:39:38.813259 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.813309 kubelet[2784]: W0123 05:39:38.813288 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.813396 kubelet[2784]: E0123 05:39:38.813354 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.813608 kubelet[2784]: E0123 05:39:38.813559 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.813608 kubelet[2784]: W0123 05:39:38.813593 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.813802 kubelet[2784]: E0123 05:39:38.813743 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.814123 kubelet[2784]: E0123 05:39:38.814102 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.814173 kubelet[2784]: W0123 05:39:38.814123 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.814367 kubelet[2784]: E0123 05:39:38.814261 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.814927 kubelet[2784]: E0123 05:39:38.814911 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.814927 kubelet[2784]: W0123 05:39:38.814927 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.815041 kubelet[2784]: E0123 05:39:38.815021 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.816106 kubelet[2784]: E0123 05:39:38.816032 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.816344 kubelet[2784]: W0123 05:39:38.816308 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.816626 kubelet[2784]: E0123 05:39:38.816485 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.817229 kubelet[2784]: E0123 05:39:38.817036 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.817328 kubelet[2784]: W0123 05:39:38.817271 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.817702 kubelet[2784]: E0123 05:39:38.817574 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.818476 kubelet[2784]: E0123 05:39:38.818428 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.819155 kubelet[2784]: W0123 05:39:38.819132 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.819601 kubelet[2784]: E0123 05:39:38.819491 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.820460 kubelet[2784]: E0123 05:39:38.820418 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.820702 kubelet[2784]: W0123 05:39:38.820612 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.821922 kubelet[2784]: E0123 05:39:38.821825 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.822536 kubelet[2784]: E0123 05:39:38.822504 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.822536 kubelet[2784]: W0123 05:39:38.822533 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.826334 kubelet[2784]: E0123 05:39:38.826164 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:38.826587 kubelet[2784]: E0123 05:39:38.826544 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.826587 kubelet[2784]: W0123 05:39:38.826578 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.826587 kubelet[2784]: E0123 05:39:38.826600 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.831954 containerd[1597]: time="2026-01-23T05:39:38.831581399Z" level=info msg="connecting to shim aa025bb395734f9ab464e8acfe2b6ac03ef378d787dc431be2fdf2ca98a5d1f6" address="unix:///run/containerd/s/526fe53f6def618fad665ab3eb820a91fafb6bb9db886b86d41f4faa67ae2bad" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:39:38.849293 kubelet[2784]: E0123 05:39:38.849152 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:38.849293 kubelet[2784]: W0123 05:39:38.849237 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:38.849293 kubelet[2784]: E0123 05:39:38.849260 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:38.887351 systemd[1]: Started cri-containerd-aa025bb395734f9ab464e8acfe2b6ac03ef378d787dc431be2fdf2ca98a5d1f6.scope - libcontainer container aa025bb395734f9ab464e8acfe2b6ac03ef378d787dc431be2fdf2ca98a5d1f6. 
Jan 23 05:39:38.910000 audit: BPF prog-id=156 op=LOAD Jan 23 05:39:38.911000 audit: BPF prog-id=157 op=LOAD Jan 23 05:39:38.911000 audit[3362]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3352 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303235626233393537333466396162343634653861636665326236 Jan 23 05:39:38.911000 audit: BPF prog-id=157 op=UNLOAD Jan 23 05:39:38.911000 audit[3362]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3352 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303235626233393537333466396162343634653861636665326236 Jan 23 05:39:38.911000 audit: BPF prog-id=158 op=LOAD Jan 23 05:39:38.911000 audit[3362]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3352 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.911000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303235626233393537333466396162343634653861636665326236 Jan 23 05:39:38.911000 audit: BPF prog-id=159 op=LOAD Jan 23 05:39:38.911000 audit[3362]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3352 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303235626233393537333466396162343634653861636665326236 Jan 23 05:39:38.912000 audit: BPF prog-id=159 op=UNLOAD Jan 23 05:39:38.912000 audit[3362]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3352 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303235626233393537333466396162343634653861636665326236 Jan 23 05:39:38.912000 audit: BPF prog-id=158 op=UNLOAD Jan 23 05:39:38.912000 audit[3362]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3352 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
05:39:38.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303235626233393537333466396162343634653861636665326236 Jan 23 05:39:38.912000 audit: BPF prog-id=160 op=LOAD Jan 23 05:39:38.912000 audit[3362]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3352 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:38.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161303235626233393537333466396162343634653861636665326236 Jan 23 05:39:38.938684 containerd[1597]: time="2026-01-23T05:39:38.938598351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nc4tf,Uid:ffaa983a-69de-48a2-bd5a-61360a164dd9,Namespace:calico-system,Attempt:0,} returns sandbox id \"aa025bb395734f9ab464e8acfe2b6ac03ef378d787dc431be2fdf2ca98a5d1f6\"" Jan 23 05:39:38.939376 kubelet[2784]: E0123 05:39:38.939342 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:39.991840 kubelet[2784]: E0123 05:39:39.991766 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rpddq" podUID="5e85a0d3-5d32-4d8a-b91c-0a641948fd22" Jan 23 05:39:40.090929 containerd[1597]: 
time="2026-01-23T05:39:40.090773542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:40.092692 containerd[1597]: time="2026-01-23T05:39:40.092610441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 23 05:39:40.095264 containerd[1597]: time="2026-01-23T05:39:40.094559372Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:40.100282 containerd[1597]: time="2026-01-23T05:39:40.100212082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:40.101460 containerd[1597]: time="2026-01-23T05:39:40.101353413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.311739464s" Jan 23 05:39:40.101460 containerd[1597]: time="2026-01-23T05:39:40.101407895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 05:39:40.105758 containerd[1597]: time="2026-01-23T05:39:40.104265490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 05:39:40.124737 containerd[1597]: time="2026-01-23T05:39:40.124668106Z" level=info msg="CreateContainer within sandbox \"efe1c50d665748ae1f0bc341804663db60fe7b5664ae9db056f0e08a3797713e\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 05:39:40.199089 containerd[1597]: time="2026-01-23T05:39:40.198981791Z" level=info msg="Container 5d211e91fbe98877c54fe0e6b9b69b6be3f17244dc20b9564be9d98e392a8d82: CDI devices from CRI Config.CDIDevices: []" Jan 23 05:39:40.213991 containerd[1597]: time="2026-01-23T05:39:40.213764764Z" level=info msg="CreateContainer within sandbox \"efe1c50d665748ae1f0bc341804663db60fe7b5664ae9db056f0e08a3797713e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5d211e91fbe98877c54fe0e6b9b69b6be3f17244dc20b9564be9d98e392a8d82\"" Jan 23 05:39:40.214392 containerd[1597]: time="2026-01-23T05:39:40.214367600Z" level=info msg="StartContainer for \"5d211e91fbe98877c54fe0e6b9b69b6be3f17244dc20b9564be9d98e392a8d82\"" Jan 23 05:39:40.215852 containerd[1597]: time="2026-01-23T05:39:40.215826163Z" level=info msg="connecting to shim 5d211e91fbe98877c54fe0e6b9b69b6be3f17244dc20b9564be9d98e392a8d82" address="unix:///run/containerd/s/34bbe453dafe62eb70b20b772f7d9849116d3c8a94a46bc78b993744fcf88688" protocol=ttrpc version=3 Jan 23 05:39:40.253407 systemd[1]: Started cri-containerd-5d211e91fbe98877c54fe0e6b9b69b6be3f17244dc20b9564be9d98e392a8d82.scope - libcontainer container 5d211e91fbe98877c54fe0e6b9b69b6be3f17244dc20b9564be9d98e392a8d82. 
Jan 23 05:39:40.280000 audit: BPF prog-id=161 op=LOAD Jan 23 05:39:40.281000 audit: BPF prog-id=162 op=LOAD Jan 23 05:39:40.281000 audit[3400]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=3234 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:40.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323131653931666265393838373763353466653065366239623639 Jan 23 05:39:40.281000 audit: BPF prog-id=162 op=UNLOAD Jan 23 05:39:40.281000 audit[3400]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3234 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:40.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323131653931666265393838373763353466653065366239623639 Jan 23 05:39:40.281000 audit: BPF prog-id=163 op=LOAD Jan 23 05:39:40.281000 audit[3400]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=3234 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:40.281000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323131653931666265393838373763353466653065366239623639 Jan 23 05:39:40.281000 audit: BPF prog-id=164 op=LOAD Jan 23 05:39:40.281000 audit[3400]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=3234 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:40.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323131653931666265393838373763353466653065366239623639 Jan 23 05:39:40.281000 audit: BPF prog-id=164 op=UNLOAD Jan 23 05:39:40.281000 audit[3400]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3234 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:40.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323131653931666265393838373763353466653065366239623639 Jan 23 05:39:40.281000 audit: BPF prog-id=163 op=UNLOAD Jan 23 05:39:40.281000 audit[3400]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3234 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
05:39:40.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323131653931666265393838373763353466653065366239623639 Jan 23 05:39:40.281000 audit: BPF prog-id=165 op=LOAD Jan 23 05:39:40.281000 audit[3400]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=3234 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:40.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564323131653931666265393838373763353466653065366239623639 Jan 23 05:39:40.344310 containerd[1597]: time="2026-01-23T05:39:40.343978912Z" level=info msg="StartContainer for \"5d211e91fbe98877c54fe0e6b9b69b6be3f17244dc20b9564be9d98e392a8d82\" returns successfully" Jan 23 05:39:41.215459 kubelet[2784]: E0123 05:39:41.215373 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:41.245949 kubelet[2784]: I0123 05:39:41.245457 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fcf599d59-js5jr" podStartSLOduration=1.930287405 podStartE2EDuration="3.245433008s" podCreationTimestamp="2026-01-23 05:39:38 +0000 UTC" firstStartedPulling="2026-01-23 05:39:38.788693962 +0000 UTC m=+18.928287313" lastFinishedPulling="2026-01-23 05:39:40.103839716 +0000 UTC m=+20.243432916" observedRunningTime="2026-01-23 05:39:41.230472863 +0000 UTC m=+21.370066095" watchObservedRunningTime="2026-01-23 
05:39:41.245433008 +0000 UTC m=+21.385026229" Jan 23 05:39:41.278000 audit[3444]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3444 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:41.278000 audit[3444]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff703aa8a0 a2=0 a3=7fff703aa88c items=0 ppid=2961 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:41.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:41.296000 audit[3444]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3444 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:41.296000 audit[3444]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff703aa8a0 a2=0 a3=7fff703aa88c items=0 ppid=2961 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:41.296000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:41.303575 kubelet[2784]: E0123 05:39:41.303516 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.303771 kubelet[2784]: W0123 05:39:41.303574 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.303771 kubelet[2784]: E0123 05:39:41.303607 2784 plugins.go:695] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.304188 kubelet[2784]: E0123 05:39:41.304110 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.304188 kubelet[2784]: W0123 05:39:41.304169 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.304188 kubelet[2784]: E0123 05:39:41.304200 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.304633 kubelet[2784]: E0123 05:39:41.304578 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.304633 kubelet[2784]: W0123 05:39:41.304609 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.304633 kubelet[2784]: E0123 05:39:41.304622 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.305130 kubelet[2784]: E0123 05:39:41.305011 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.305130 kubelet[2784]: W0123 05:39:41.305041 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.305130 kubelet[2784]: E0123 05:39:41.305102 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.305483 kubelet[2784]: E0123 05:39:41.305424 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.305483 kubelet[2784]: W0123 05:39:41.305455 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.305483 kubelet[2784]: E0123 05:39:41.305466 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.305854 kubelet[2784]: E0123 05:39:41.305794 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.305854 kubelet[2784]: W0123 05:39:41.305824 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.305854 kubelet[2784]: E0123 05:39:41.305836 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.306218 kubelet[2784]: E0123 05:39:41.306183 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.306218 kubelet[2784]: W0123 05:39:41.306210 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.306349 kubelet[2784]: E0123 05:39:41.306223 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.306542 kubelet[2784]: E0123 05:39:41.306512 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.306542 kubelet[2784]: W0123 05:39:41.306536 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.306633 kubelet[2784]: E0123 05:39:41.306551 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.307026 kubelet[2784]: E0123 05:39:41.306996 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.307026 kubelet[2784]: W0123 05:39:41.307019 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.307268 kubelet[2784]: E0123 05:39:41.307033 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.307496 kubelet[2784]: E0123 05:39:41.307465 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.307496 kubelet[2784]: W0123 05:39:41.307489 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.307586 kubelet[2784]: E0123 05:39:41.307502 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.307843 kubelet[2784]: E0123 05:39:41.307813 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.307843 kubelet[2784]: W0123 05:39:41.307837 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.307930 kubelet[2784]: E0123 05:39:41.307850 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.308202 kubelet[2784]: E0123 05:39:41.308169 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.308202 kubelet[2784]: W0123 05:39:41.308196 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.308293 kubelet[2784]: E0123 05:39:41.308209 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.308686 kubelet[2784]: E0123 05:39:41.308583 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.308686 kubelet[2784]: W0123 05:39:41.308615 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.308686 kubelet[2784]: E0123 05:39:41.308631 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.309048 kubelet[2784]: E0123 05:39:41.308996 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.309048 kubelet[2784]: W0123 05:39:41.309024 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.309048 kubelet[2784]: E0123 05:39:41.309039 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.309403 kubelet[2784]: E0123 05:39:41.309370 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.309403 kubelet[2784]: W0123 05:39:41.309395 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.309480 kubelet[2784]: E0123 05:39:41.309409 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.330372 kubelet[2784]: E0123 05:39:41.330241 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.330372 kubelet[2784]: W0123 05:39:41.330342 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.330372 kubelet[2784]: E0123 05:39:41.330371 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.330930 kubelet[2784]: E0123 05:39:41.330893 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.330930 kubelet[2784]: W0123 05:39:41.330909 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.331015 kubelet[2784]: E0123 05:39:41.330963 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.331409 kubelet[2784]: E0123 05:39:41.331379 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.331456 kubelet[2784]: W0123 05:39:41.331409 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.331502 kubelet[2784]: E0123 05:39:41.331460 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.332168 kubelet[2784]: E0123 05:39:41.332138 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.332225 kubelet[2784]: W0123 05:39:41.332169 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.332258 kubelet[2784]: E0123 05:39:41.332221 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.332933 kubelet[2784]: E0123 05:39:41.332906 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.332978 kubelet[2784]: W0123 05:39:41.332933 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.333026 kubelet[2784]: E0123 05:39:41.333002 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.333679 kubelet[2784]: E0123 05:39:41.333618 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.333732 kubelet[2784]: W0123 05:39:41.333680 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.333847 kubelet[2784]: E0123 05:39:41.333806 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.333967 kubelet[2784]: E0123 05:39:41.333942 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.334006 kubelet[2784]: W0123 05:39:41.333967 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.334263 kubelet[2784]: E0123 05:39:41.334162 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.334449 kubelet[2784]: E0123 05:39:41.334398 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.334449 kubelet[2784]: W0123 05:39:41.334434 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.334525 kubelet[2784]: E0123 05:39:41.334450 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.335295 kubelet[2784]: E0123 05:39:41.335254 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.335295 kubelet[2784]: W0123 05:39:41.335290 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.335383 kubelet[2784]: E0123 05:39:41.335327 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.336090 kubelet[2784]: E0123 05:39:41.335765 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.336090 kubelet[2784]: W0123 05:39:41.335797 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.336090 kubelet[2784]: E0123 05:39:41.335830 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.336326 kubelet[2784]: E0123 05:39:41.336287 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.336370 kubelet[2784]: W0123 05:39:41.336335 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.336457 kubelet[2784]: E0123 05:39:41.336413 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.336745 kubelet[2784]: E0123 05:39:41.336707 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.336745 kubelet[2784]: W0123 05:39:41.336741 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.336821 kubelet[2784]: E0123 05:39:41.336773 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.337484 kubelet[2784]: E0123 05:39:41.337425 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.337544 kubelet[2784]: W0123 05:39:41.337501 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.337582 kubelet[2784]: E0123 05:39:41.337538 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.337940 kubelet[2784]: E0123 05:39:41.337915 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.337940 kubelet[2784]: W0123 05:39:41.337937 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.338126 kubelet[2784]: E0123 05:39:41.338093 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.338729 kubelet[2784]: E0123 05:39:41.338709 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.338729 kubelet[2784]: W0123 05:39:41.338723 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.338801 kubelet[2784]: E0123 05:39:41.338740 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.339560 kubelet[2784]: E0123 05:39:41.339533 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.339560 kubelet[2784]: W0123 05:39:41.339558 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.339892 kubelet[2784]: E0123 05:39:41.339818 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.339937 kubelet[2784]: E0123 05:39:41.339926 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.339977 kubelet[2784]: W0123 05:39:41.339935 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.339977 kubelet[2784]: E0123 05:39:41.339946 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 05:39:41.340809 kubelet[2784]: E0123 05:39:41.340769 2784 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 05:39:41.340809 kubelet[2784]: W0123 05:39:41.340806 2784 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 05:39:41.340893 kubelet[2784]: E0123 05:39:41.340822 2784 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 05:39:41.403492 containerd[1597]: time="2026-01-23T05:39:41.403404158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:41.404885 containerd[1597]: time="2026-01-23T05:39:41.404823067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 23 05:39:41.406420 containerd[1597]: time="2026-01-23T05:39:41.406358294Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:41.409863 containerd[1597]: time="2026-01-23T05:39:41.409800885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:41.410534 containerd[1597]: time="2026-01-23T05:39:41.410442939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.306140551s" Jan 23 05:39:41.410534 containerd[1597]: time="2026-01-23T05:39:41.410497451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 05:39:41.413682 containerd[1597]: time="2026-01-23T05:39:41.413248259Z" level=info msg="CreateContainer within sandbox \"aa025bb395734f9ab464e8acfe2b6ac03ef378d787dc431be2fdf2ca98a5d1f6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 05:39:41.426184 containerd[1597]: time="2026-01-23T05:39:41.426011356Z" level=info msg="Container adc8830f12b32aedf019a7ccf20ce6e1d4bd5d8d2a76b4c675289a827a6c9d0b: CDI devices from CRI Config.CDIDevices: []" Jan 23 05:39:41.436697 containerd[1597]: time="2026-01-23T05:39:41.436571191Z" level=info msg="CreateContainer within sandbox \"aa025bb395734f9ab464e8acfe2b6ac03ef378d787dc431be2fdf2ca98a5d1f6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"adc8830f12b32aedf019a7ccf20ce6e1d4bd5d8d2a76b4c675289a827a6c9d0b\"" Jan 23 05:39:41.437503 containerd[1597]: time="2026-01-23T05:39:41.437417050Z" level=info msg="StartContainer for \"adc8830f12b32aedf019a7ccf20ce6e1d4bd5d8d2a76b4c675289a827a6c9d0b\"" Jan 23 05:39:41.439735 containerd[1597]: time="2026-01-23T05:39:41.439609564Z" level=info msg="connecting to shim adc8830f12b32aedf019a7ccf20ce6e1d4bd5d8d2a76b4c675289a827a6c9d0b" address="unix:///run/containerd/s/526fe53f6def618fad665ab3eb820a91fafb6bb9db886b86d41f4faa67ae2bad" protocol=ttrpc version=3 Jan 23 05:39:41.482463 systemd[1]: Started cri-containerd-adc8830f12b32aedf019a7ccf20ce6e1d4bd5d8d2a76b4c675289a827a6c9d0b.scope - libcontainer container adc8830f12b32aedf019a7ccf20ce6e1d4bd5d8d2a76b4c675289a827a6c9d0b. 
Jan 23 05:39:41.588000 audit: BPF prog-id=166 op=LOAD Jan 23 05:39:41.588000 audit[3482]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3352 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:41.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164633838333066313262333261656466303139613763636632306365 Jan 23 05:39:41.589000 audit: BPF prog-id=167 op=LOAD Jan 23 05:39:41.589000 audit[3482]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3352 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:41.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164633838333066313262333261656466303139613763636632306365 Jan 23 05:39:41.589000 audit: BPF prog-id=167 op=UNLOAD Jan 23 05:39:41.589000 audit[3482]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3352 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:41.589000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164633838333066313262333261656466303139613763636632306365 Jan 23 05:39:41.589000 audit: BPF prog-id=166 op=UNLOAD Jan 23 05:39:41.589000 audit[3482]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3352 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:41.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164633838333066313262333261656466303139613763636632306365 Jan 23 05:39:41.589000 audit: BPF prog-id=168 op=LOAD Jan 23 05:39:41.589000 audit[3482]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3352 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:41.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164633838333066313262333261656466303139613763636632306365 Jan 23 05:39:41.622139 containerd[1597]: time="2026-01-23T05:39:41.622044672Z" level=info msg="StartContainer for \"adc8830f12b32aedf019a7ccf20ce6e1d4bd5d8d2a76b4c675289a827a6c9d0b\" returns successfully" Jan 23 05:39:41.642827 systemd[1]: cri-containerd-adc8830f12b32aedf019a7ccf20ce6e1d4bd5d8d2a76b4c675289a827a6c9d0b.scope: Deactivated successfully. 
Jan 23 05:39:41.648498 containerd[1597]: time="2026-01-23T05:39:41.648423222Z" level=info msg="received container exit event container_id:\"adc8830f12b32aedf019a7ccf20ce6e1d4bd5d8d2a76b4c675289a827a6c9d0b\" id:\"adc8830f12b32aedf019a7ccf20ce6e1d4bd5d8d2a76b4c675289a827a6c9d0b\" pid:3496 exited_at:{seconds:1769146781 nanos:645540318}" Jan 23 05:39:41.649000 audit: BPF prog-id=168 op=UNLOAD Jan 23 05:39:41.695292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adc8830f12b32aedf019a7ccf20ce6e1d4bd5d8d2a76b4c675289a827a6c9d0b-rootfs.mount: Deactivated successfully. Jan 23 05:39:41.992208 kubelet[2784]: E0123 05:39:41.991986 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rpddq" podUID="5e85a0d3-5d32-4d8a-b91c-0a641948fd22" Jan 23 05:39:42.220150 kubelet[2784]: E0123 05:39:42.219837 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:42.220150 kubelet[2784]: E0123 05:39:42.219919 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:42.221107 containerd[1597]: time="2026-01-23T05:39:42.220984207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 05:39:43.223432 kubelet[2784]: E0123 05:39:43.223376 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:43.993440 kubelet[2784]: E0123 05:39:43.992401 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rpddq" podUID="5e85a0d3-5d32-4d8a-b91c-0a641948fd22" Jan 23 05:39:44.410233 containerd[1597]: time="2026-01-23T05:39:44.410145893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:44.411192 containerd[1597]: time="2026-01-23T05:39:44.411130629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 23 05:39:44.412809 containerd[1597]: time="2026-01-23T05:39:44.412741696Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:44.416218 containerd[1597]: time="2026-01-23T05:39:44.416040668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:44.416917 containerd[1597]: time="2026-01-23T05:39:44.416761121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.195534551s" Jan 23 05:39:44.416917 containerd[1597]: time="2026-01-23T05:39:44.416813508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 05:39:44.420171 containerd[1597]: time="2026-01-23T05:39:44.420124199Z" level=info msg="CreateContainer within sandbox 
\"aa025bb395734f9ab464e8acfe2b6ac03ef378d787dc431be2fdf2ca98a5d1f6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 05:39:44.432187 containerd[1597]: time="2026-01-23T05:39:44.432045879Z" level=info msg="Container 78ad340bc3129bc2add12e98fb6686fc435e8194451db02896e42556365a934a: CDI devices from CRI Config.CDIDevices: []" Jan 23 05:39:44.448732 containerd[1597]: time="2026-01-23T05:39:44.448236150Z" level=info msg="CreateContainer within sandbox \"aa025bb395734f9ab464e8acfe2b6ac03ef378d787dc431be2fdf2ca98a5d1f6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"78ad340bc3129bc2add12e98fb6686fc435e8194451db02896e42556365a934a\"" Jan 23 05:39:44.450025 containerd[1597]: time="2026-01-23T05:39:44.449793487Z" level=info msg="StartContainer for \"78ad340bc3129bc2add12e98fb6686fc435e8194451db02896e42556365a934a\"" Jan 23 05:39:44.451979 containerd[1597]: time="2026-01-23T05:39:44.451930056Z" level=info msg="connecting to shim 78ad340bc3129bc2add12e98fb6686fc435e8194451db02896e42556365a934a" address="unix:///run/containerd/s/526fe53f6def618fad665ab3eb820a91fafb6bb9db886b86d41f4faa67ae2bad" protocol=ttrpc version=3 Jan 23 05:39:44.505313 systemd[1]: Started cri-containerd-78ad340bc3129bc2add12e98fb6686fc435e8194451db02896e42556365a934a.scope - libcontainer container 78ad340bc3129bc2add12e98fb6686fc435e8194451db02896e42556365a934a. 
Jan 23 05:39:44.606000 audit: BPF prog-id=169 op=LOAD Jan 23 05:39:44.609977 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 23 05:39:44.610130 kernel: audit: type=1334 audit(1769146784.606:561): prog-id=169 op=LOAD Jan 23 05:39:44.606000 audit[3544]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3352 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:44.620625 kernel: audit: type=1300 audit(1769146784.606:561): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3352 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:44.620817 kernel: audit: type=1327 audit(1769146784.606:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738616433343062633331323962633261646431326539386662363638 Jan 23 05:39:44.606000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738616433343062633331323962633261646431326539386662363638 Jan 23 05:39:44.607000 audit: BPF prog-id=170 op=LOAD Jan 23 05:39:44.607000 audit[3544]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3352 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:44.641132 kernel: audit: type=1334 
audit(1769146784.607:562): prog-id=170 op=LOAD Jan 23 05:39:44.641230 kernel: audit: type=1300 audit(1769146784.607:562): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3352 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:44.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738616433343062633331323962633261646431326539386662363638 Jan 23 05:39:44.652013 kernel: audit: type=1327 audit(1769146784.607:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738616433343062633331323962633261646431326539386662363638 Jan 23 05:39:44.607000 audit: BPF prog-id=170 op=UNLOAD Jan 23 05:39:44.607000 audit[3544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3352 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:44.667888 kernel: audit: type=1334 audit(1769146784.607:563): prog-id=170 op=UNLOAD Jan 23 05:39:44.668027 kernel: audit: type=1300 audit(1769146784.607:563): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3352 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:44.668155 kernel: audit: type=1327 audit(1769146784.607:563): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738616433343062633331323962633261646431326539386662363638 Jan 23 05:39:44.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738616433343062633331323962633261646431326539386662363638 Jan 23 05:39:44.671184 containerd[1597]: time="2026-01-23T05:39:44.670923028Z" level=info msg="StartContainer for \"78ad340bc3129bc2add12e98fb6686fc435e8194451db02896e42556365a934a\" returns successfully" Jan 23 05:39:44.607000 audit: BPF prog-id=169 op=UNLOAD Jan 23 05:39:44.686829 kernel: audit: type=1334 audit(1769146784.607:564): prog-id=169 op=UNLOAD Jan 23 05:39:44.607000 audit[3544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3352 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:44.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738616433343062633331323962633261646431326539386662363638 Jan 23 05:39:44.607000 audit: BPF prog-id=171 op=LOAD Jan 23 05:39:44.607000 audit[3544]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3352 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:44.607000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738616433343062633331323962633261646431326539386662363638 Jan 23 05:39:45.243788 kubelet[2784]: E0123 05:39:45.242787 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:45.546746 systemd[1]: cri-containerd-78ad340bc3129bc2add12e98fb6686fc435e8194451db02896e42556365a934a.scope: Deactivated successfully. Jan 23 05:39:45.547149 systemd[1]: cri-containerd-78ad340bc3129bc2add12e98fb6686fc435e8194451db02896e42556365a934a.scope: Consumed 893ms CPU time, 175.9M memory peak, 4.9M read from disk, 171.3M written to disk. Jan 23 05:39:45.552685 containerd[1597]: time="2026-01-23T05:39:45.552588151Z" level=info msg="received container exit event container_id:\"78ad340bc3129bc2add12e98fb6686fc435e8194451db02896e42556365a934a\" id:\"78ad340bc3129bc2add12e98fb6686fc435e8194451db02896e42556365a934a\" pid:3558 exited_at:{seconds:1769146785 nanos:548809978}" Jan 23 05:39:45.556000 audit: BPF prog-id=171 op=UNLOAD Jan 23 05:39:45.593280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78ad340bc3129bc2add12e98fb6686fc435e8194451db02896e42556365a934a-rootfs.mount: Deactivated successfully. Jan 23 05:39:45.612442 kubelet[2784]: I0123 05:39:45.612313 2784 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 05:39:45.672870 systemd[1]: Created slice kubepods-besteffort-pod03afe386_d286_45e1_b2d1_9d888b5a436b.slice - libcontainer container kubepods-besteffort-pod03afe386_d286_45e1_b2d1_9d888b5a436b.slice. 
Jan 23 05:39:45.686530 systemd[1]: Created slice kubepods-besteffort-pod3cdbf9fd_0ae3_408e_ba62_8b7474385dec.slice - libcontainer container kubepods-besteffort-pod3cdbf9fd_0ae3_408e_ba62_8b7474385dec.slice. Jan 23 05:39:45.689466 kubelet[2784]: I0123 05:39:45.689434 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwrcs\" (UniqueName: \"kubernetes.io/projected/03afe386-d286-45e1-b2d1-9d888b5a436b-kube-api-access-hwrcs\") pod \"calico-apiserver-68f956777f-b4hm5\" (UID: \"03afe386-d286-45e1-b2d1-9d888b5a436b\") " pod="calico-apiserver/calico-apiserver-68f956777f-b4hm5" Jan 23 05:39:45.689766 kubelet[2784]: I0123 05:39:45.689718 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/03afe386-d286-45e1-b2d1-9d888b5a436b-calico-apiserver-certs\") pod \"calico-apiserver-68f956777f-b4hm5\" (UID: \"03afe386-d286-45e1-b2d1-9d888b5a436b\") " pod="calico-apiserver/calico-apiserver-68f956777f-b4hm5" Jan 23 05:39:45.698262 systemd[1]: Created slice kubepods-besteffort-pod3912831c_901b_4041_9115_637bb8679bc2.slice - libcontainer container kubepods-besteffort-pod3912831c_901b_4041_9115_637bb8679bc2.slice. Jan 23 05:39:45.710632 systemd[1]: Created slice kubepods-besteffort-podd0e779d4_f949_4480_8be6_1410e9d1d223.slice - libcontainer container kubepods-besteffort-podd0e779d4_f949_4480_8be6_1410e9d1d223.slice. Jan 23 05:39:45.724518 systemd[1]: Created slice kubepods-besteffort-podf2986fef_16f6_4f5c_ada1_3406bc086cb8.slice - libcontainer container kubepods-besteffort-podf2986fef_16f6_4f5c_ada1_3406bc086cb8.slice. Jan 23 05:39:45.736511 systemd[1]: Created slice kubepods-burstable-pod055be855_4fc5_42d5_be74_896584a97ac5.slice - libcontainer container kubepods-burstable-pod055be855_4fc5_42d5_be74_896584a97ac5.slice. 
Jan 23 05:39:45.749423 systemd[1]: Created slice kubepods-burstable-poda6ad78ea_276e_4c01_82bf_ff4fc9289c99.slice - libcontainer container kubepods-burstable-poda6ad78ea_276e_4c01_82bf_ff4fc9289c99.slice. Jan 23 05:39:45.762140 systemd[1]: Created slice kubepods-besteffort-pod9393f034_f86c_4875_9435_8f85b0225d78.slice - libcontainer container kubepods-besteffort-pod9393f034_f86c_4875_9435_8f85b0225d78.slice. Jan 23 05:39:45.791217 kubelet[2784]: I0123 05:39:45.791150 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3912831c-901b-4041-9115-637bb8679bc2-calico-apiserver-certs\") pod \"calico-apiserver-fb784f8d9-hrfdf\" (UID: \"3912831c-901b-4041-9115-637bb8679bc2\") " pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" Jan 23 05:39:45.791217 kubelet[2784]: I0123 05:39:45.791222 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqb4j\" (UniqueName: \"kubernetes.io/projected/a6ad78ea-276e-4c01-82bf-ff4fc9289c99-kube-api-access-hqb4j\") pod \"coredns-668d6bf9bc-mdl4l\" (UID: \"a6ad78ea-276e-4c01-82bf-ff4fc9289c99\") " pod="kube-system/coredns-668d6bf9bc-mdl4l" Jan 23 05:39:45.791400 kubelet[2784]: I0123 05:39:45.791246 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhcw9\" (UniqueName: \"kubernetes.io/projected/9393f034-f86c-4875-9435-8f85b0225d78-kube-api-access-lhcw9\") pod \"calico-apiserver-68f956777f-dhm59\" (UID: \"9393f034-f86c-4875-9435-8f85b0225d78\") " pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" Jan 23 05:39:45.791400 kubelet[2784]: I0123 05:39:45.791267 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-686jd\" (UniqueName: \"kubernetes.io/projected/3cdbf9fd-0ae3-408e-ba62-8b7474385dec-kube-api-access-686jd\") pod 
\"calico-kube-controllers-6986698974-zhprj\" (UID: \"3cdbf9fd-0ae3-408e-ba62-8b7474385dec\") " pod="calico-system/calico-kube-controllers-6986698974-zhprj" Jan 23 05:39:45.791400 kubelet[2784]: I0123 05:39:45.791287 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0e779d4-f949-4480-8be6-1410e9d1d223-whisker-backend-key-pair\") pod \"whisker-5755b9bbc-mpw6q\" (UID: \"d0e779d4-f949-4480-8be6-1410e9d1d223\") " pod="calico-system/whisker-5755b9bbc-mpw6q" Jan 23 05:39:45.791400 kubelet[2784]: I0123 05:39:45.791306 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0e779d4-f949-4480-8be6-1410e9d1d223-whisker-ca-bundle\") pod \"whisker-5755b9bbc-mpw6q\" (UID: \"d0e779d4-f949-4480-8be6-1410e9d1d223\") " pod="calico-system/whisker-5755b9bbc-mpw6q" Jan 23 05:39:45.791400 kubelet[2784]: I0123 05:39:45.791319 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cdbf9fd-0ae3-408e-ba62-8b7474385dec-tigera-ca-bundle\") pod \"calico-kube-controllers-6986698974-zhprj\" (UID: \"3cdbf9fd-0ae3-408e-ba62-8b7474385dec\") " pod="calico-system/calico-kube-controllers-6986698974-zhprj" Jan 23 05:39:45.791572 kubelet[2784]: I0123 05:39:45.791336 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2986fef-16f6-4f5c-ada1-3406bc086cb8-config\") pod \"goldmane-666569f655-56x2c\" (UID: \"f2986fef-16f6-4f5c-ada1-3406bc086cb8\") " pod="calico-system/goldmane-666569f655-56x2c" Jan 23 05:39:45.791572 kubelet[2784]: I0123 05:39:45.791352 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgb5v\" (UniqueName: 
\"kubernetes.io/projected/f2986fef-16f6-4f5c-ada1-3406bc086cb8-kube-api-access-xgb5v\") pod \"goldmane-666569f655-56x2c\" (UID: \"f2986fef-16f6-4f5c-ada1-3406bc086cb8\") " pod="calico-system/goldmane-666569f655-56x2c" Jan 23 05:39:45.791572 kubelet[2784]: I0123 05:39:45.791371 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9393f034-f86c-4875-9435-8f85b0225d78-calico-apiserver-certs\") pod \"calico-apiserver-68f956777f-dhm59\" (UID: \"9393f034-f86c-4875-9435-8f85b0225d78\") " pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" Jan 23 05:39:45.791572 kubelet[2784]: I0123 05:39:45.791384 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrpbg\" (UniqueName: \"kubernetes.io/projected/3912831c-901b-4041-9115-637bb8679bc2-kube-api-access-vrpbg\") pod \"calico-apiserver-fb784f8d9-hrfdf\" (UID: \"3912831c-901b-4041-9115-637bb8679bc2\") " pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" Jan 23 05:39:45.791572 kubelet[2784]: I0123 05:39:45.791410 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f2986fef-16f6-4f5c-ada1-3406bc086cb8-goldmane-key-pair\") pod \"goldmane-666569f655-56x2c\" (UID: \"f2986fef-16f6-4f5c-ada1-3406bc086cb8\") " pod="calico-system/goldmane-666569f655-56x2c" Jan 23 05:39:45.791729 kubelet[2784]: I0123 05:39:45.791431 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6ad78ea-276e-4c01-82bf-ff4fc9289c99-config-volume\") pod \"coredns-668d6bf9bc-mdl4l\" (UID: \"a6ad78ea-276e-4c01-82bf-ff4fc9289c99\") " pod="kube-system/coredns-668d6bf9bc-mdl4l" Jan 23 05:39:45.791729 kubelet[2784]: I0123 05:39:45.791447 2784 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2986fef-16f6-4f5c-ada1-3406bc086cb8-goldmane-ca-bundle\") pod \"goldmane-666569f655-56x2c\" (UID: \"f2986fef-16f6-4f5c-ada1-3406bc086cb8\") " pod="calico-system/goldmane-666569f655-56x2c" Jan 23 05:39:45.791729 kubelet[2784]: I0123 05:39:45.791523 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/055be855-4fc5-42d5-be74-896584a97ac5-config-volume\") pod \"coredns-668d6bf9bc-47b87\" (UID: \"055be855-4fc5-42d5-be74-896584a97ac5\") " pod="kube-system/coredns-668d6bf9bc-47b87" Jan 23 05:39:45.791729 kubelet[2784]: I0123 05:39:45.791550 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq58c\" (UniqueName: \"kubernetes.io/projected/055be855-4fc5-42d5-be74-896584a97ac5-kube-api-access-qq58c\") pod \"coredns-668d6bf9bc-47b87\" (UID: \"055be855-4fc5-42d5-be74-896584a97ac5\") " pod="kube-system/coredns-668d6bf9bc-47b87" Jan 23 05:39:45.791729 kubelet[2784]: I0123 05:39:45.791596 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drf7g\" (UniqueName: \"kubernetes.io/projected/d0e779d4-f949-4480-8be6-1410e9d1d223-kube-api-access-drf7g\") pod \"whisker-5755b9bbc-mpw6q\" (UID: \"d0e779d4-f949-4480-8be6-1410e9d1d223\") " pod="calico-system/whisker-5755b9bbc-mpw6q" Jan 23 05:39:45.980455 containerd[1597]: time="2026-01-23T05:39:45.980331935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f956777f-b4hm5,Uid:03afe386-d286-45e1-b2d1-9d888b5a436b,Namespace:calico-apiserver,Attempt:0,}" Jan 23 05:39:45.994374 containerd[1597]: time="2026-01-23T05:39:45.994204084Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6986698974-zhprj,Uid:3cdbf9fd-0ae3-408e-ba62-8b7474385dec,Namespace:calico-system,Attempt:0,}" Jan 23 05:39:46.003174 systemd[1]: Created slice kubepods-besteffort-pod5e85a0d3_5d32_4d8a_b91c_0a641948fd22.slice - libcontainer container kubepods-besteffort-pod5e85a0d3_5d32_4d8a_b91c_0a641948fd22.slice. Jan 23 05:39:46.008323 containerd[1597]: time="2026-01-23T05:39:46.008233142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fb784f8d9-hrfdf,Uid:3912831c-901b-4041-9115-637bb8679bc2,Namespace:calico-apiserver,Attempt:0,}" Jan 23 05:39:46.010265 containerd[1597]: time="2026-01-23T05:39:46.010225948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rpddq,Uid:5e85a0d3-5d32-4d8a-b91c-0a641948fd22,Namespace:calico-system,Attempt:0,}" Jan 23 05:39:46.019752 containerd[1597]: time="2026-01-23T05:39:46.019318728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5755b9bbc-mpw6q,Uid:d0e779d4-f949-4480-8be6-1410e9d1d223,Namespace:calico-system,Attempt:0,}" Jan 23 05:39:46.030792 containerd[1597]: time="2026-01-23T05:39:46.030722591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-56x2c,Uid:f2986fef-16f6-4f5c-ada1-3406bc086cb8,Namespace:calico-system,Attempt:0,}" Jan 23 05:39:46.044768 kubelet[2784]: E0123 05:39:46.044712 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:46.047555 containerd[1597]: time="2026-01-23T05:39:46.047506853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-47b87,Uid:055be855-4fc5-42d5-be74-896584a97ac5,Namespace:kube-system,Attempt:0,}" Jan 23 05:39:46.058769 kubelet[2784]: E0123 05:39:46.058640 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:46.062285 containerd[1597]: time="2026-01-23T05:39:46.062199108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mdl4l,Uid:a6ad78ea-276e-4c01-82bf-ff4fc9289c99,Namespace:kube-system,Attempt:0,}" Jan 23 05:39:46.077541 containerd[1597]: time="2026-01-23T05:39:46.077123525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f956777f-dhm59,Uid:9393f034-f86c-4875-9435-8f85b0225d78,Namespace:calico-apiserver,Attempt:0,}" Jan 23 05:39:46.255970 containerd[1597]: time="2026-01-23T05:39:46.255227456Z" level=error msg="Failed to destroy network for sandbox \"2dc8545224f27fbf70e727561770e144199f893ba3d51de07cc3d47306bd75bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.272466 containerd[1597]: time="2026-01-23T05:39:46.272254042Z" level=error msg="Failed to destroy network for sandbox \"199733cda197e41edf8b474b6160e4f05d0fcea8e1f2e2f30cf65262813ee1b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.278586 containerd[1597]: time="2026-01-23T05:39:46.278419866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f956777f-b4hm5,Uid:03afe386-d286-45e1-b2d1-9d888b5a436b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc8545224f27fbf70e727561770e144199f893ba3d51de07cc3d47306bd75bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.279527 kubelet[2784]: E0123 05:39:46.279481 2784 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc8545224f27fbf70e727561770e144199f893ba3d51de07cc3d47306bd75bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.280467 kubelet[2784]: E0123 05:39:46.280369 2784 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc8545224f27fbf70e727561770e144199f893ba3d51de07cc3d47306bd75bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68f956777f-b4hm5" Jan 23 05:39:46.280467 kubelet[2784]: E0123 05:39:46.280423 2784 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc8545224f27fbf70e727561770e144199f893ba3d51de07cc3d47306bd75bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68f956777f-b4hm5" Jan 23 05:39:46.280727 kubelet[2784]: E0123 05:39:46.280513 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68f956777f-b4hm5_calico-apiserver(03afe386-d286-45e1-b2d1-9d888b5a436b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68f956777f-b4hm5_calico-apiserver(03afe386-d286-45e1-b2d1-9d888b5a436b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dc8545224f27fbf70e727561770e144199f893ba3d51de07cc3d47306bd75bf\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68f956777f-b4hm5" podUID="03afe386-d286-45e1-b2d1-9d888b5a436b" Jan 23 05:39:46.281297 containerd[1597]: time="2026-01-23T05:39:46.281231869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fb784f8d9-hrfdf,Uid:3912831c-901b-4041-9115-637bb8679bc2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"199733cda197e41edf8b474b6160e4f05d0fcea8e1f2e2f30cf65262813ee1b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.282293 kubelet[2784]: E0123 05:39:46.281474 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"199733cda197e41edf8b474b6160e4f05d0fcea8e1f2e2f30cf65262813ee1b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.282293 kubelet[2784]: E0123 05:39:46.281512 2784 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"199733cda197e41edf8b474b6160e4f05d0fcea8e1f2e2f30cf65262813ee1b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" Jan 23 05:39:46.282293 kubelet[2784]: E0123 05:39:46.281534 2784 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"199733cda197e41edf8b474b6160e4f05d0fcea8e1f2e2f30cf65262813ee1b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" Jan 23 05:39:46.282425 kubelet[2784]: E0123 05:39:46.281574 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fb784f8d9-hrfdf_calico-apiserver(3912831c-901b-4041-9115-637bb8679bc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fb784f8d9-hrfdf_calico-apiserver(3912831c-901b-4041-9115-637bb8679bc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"199733cda197e41edf8b474b6160e4f05d0fcea8e1f2e2f30cf65262813ee1b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" podUID="3912831c-901b-4041-9115-637bb8679bc2" Jan 23 05:39:46.284442 kubelet[2784]: E0123 05:39:46.284407 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:46.287918 containerd[1597]: time="2026-01-23T05:39:46.286896320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 05:39:46.294815 containerd[1597]: time="2026-01-23T05:39:46.294715028Z" level=error msg="Failed to destroy network for sandbox \"a9c4fc7f392d67972cff056d18b4e338a6beedc2de0d279a9a7e40d255a594df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.301701 containerd[1597]: time="2026-01-23T05:39:46.301575877Z" 
level=error msg="Failed to destroy network for sandbox \"6d45ae7a0dbb2bfb9f1a7cea7e76f552b60ce0c87a291a73286c769a01efc5b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.303456 containerd[1597]: time="2026-01-23T05:39:46.303386318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6986698974-zhprj,Uid:3cdbf9fd-0ae3-408e-ba62-8b7474385dec,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9c4fc7f392d67972cff056d18b4e338a6beedc2de0d279a9a7e40d255a594df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.304090 kubelet[2784]: E0123 05:39:46.303925 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9c4fc7f392d67972cff056d18b4e338a6beedc2de0d279a9a7e40d255a594df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.304154 kubelet[2784]: E0123 05:39:46.304105 2784 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9c4fc7f392d67972cff056d18b4e338a6beedc2de0d279a9a7e40d255a594df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6986698974-zhprj" Jan 23 05:39:46.304244 kubelet[2784]: E0123 05:39:46.304148 2784 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"a9c4fc7f392d67972cff056d18b4e338a6beedc2de0d279a9a7e40d255a594df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6986698974-zhprj" Jan 23 05:39:46.304315 kubelet[2784]: E0123 05:39:46.304263 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6986698974-zhprj_calico-system(3cdbf9fd-0ae3-408e-ba62-8b7474385dec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6986698974-zhprj_calico-system(3cdbf9fd-0ae3-408e-ba62-8b7474385dec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9c4fc7f392d67972cff056d18b4e338a6beedc2de0d279a9a7e40d255a594df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6986698974-zhprj" podUID="3cdbf9fd-0ae3-408e-ba62-8b7474385dec" Jan 23 05:39:46.308732 containerd[1597]: time="2026-01-23T05:39:46.308582419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rpddq,Uid:5e85a0d3-5d32-4d8a-b91c-0a641948fd22,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d45ae7a0dbb2bfb9f1a7cea7e76f552b60ce0c87a291a73286c769a01efc5b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.309347 kubelet[2784]: E0123 05:39:46.308946 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6d45ae7a0dbb2bfb9f1a7cea7e76f552b60ce0c87a291a73286c769a01efc5b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.309347 kubelet[2784]: E0123 05:39:46.309032 2784 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d45ae7a0dbb2bfb9f1a7cea7e76f552b60ce0c87a291a73286c769a01efc5b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rpddq" Jan 23 05:39:46.309347 kubelet[2784]: E0123 05:39:46.309140 2784 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d45ae7a0dbb2bfb9f1a7cea7e76f552b60ce0c87a291a73286c769a01efc5b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rpddq" Jan 23 05:39:46.309505 kubelet[2784]: E0123 05:39:46.309212 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rpddq_calico-system(5e85a0d3-5d32-4d8a-b91c-0a641948fd22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rpddq_calico-system(5e85a0d3-5d32-4d8a-b91c-0a641948fd22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d45ae7a0dbb2bfb9f1a7cea7e76f552b60ce0c87a291a73286c769a01efc5b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rpddq" 
podUID="5e85a0d3-5d32-4d8a-b91c-0a641948fd22" Jan 23 05:39:46.343840 containerd[1597]: time="2026-01-23T05:39:46.343758624Z" level=error msg="Failed to destroy network for sandbox \"c7d03a47751b12ac8f4d5a574cafaf7ceaa654746a8b0946590148058055eac5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.347887 containerd[1597]: time="2026-01-23T05:39:46.347754074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mdl4l,Uid:a6ad78ea-276e-4c01-82bf-ff4fc9289c99,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7d03a47751b12ac8f4d5a574cafaf7ceaa654746a8b0946590148058055eac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.348271 kubelet[2784]: E0123 05:39:46.348209 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7d03a47751b12ac8f4d5a574cafaf7ceaa654746a8b0946590148058055eac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.348357 kubelet[2784]: E0123 05:39:46.348304 2784 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7d03a47751b12ac8f4d5a574cafaf7ceaa654746a8b0946590148058055eac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mdl4l" Jan 23 05:39:46.348357 kubelet[2784]: E0123 
05:39:46.348340 2784 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7d03a47751b12ac8f4d5a574cafaf7ceaa654746a8b0946590148058055eac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mdl4l" Jan 23 05:39:46.348457 kubelet[2784]: E0123 05:39:46.348400 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mdl4l_kube-system(a6ad78ea-276e-4c01-82bf-ff4fc9289c99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mdl4l_kube-system(a6ad78ea-276e-4c01-82bf-ff4fc9289c99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7d03a47751b12ac8f4d5a574cafaf7ceaa654746a8b0946590148058055eac5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mdl4l" podUID="a6ad78ea-276e-4c01-82bf-ff4fc9289c99" Jan 23 05:39:46.349894 containerd[1597]: time="2026-01-23T05:39:46.349532255Z" level=error msg="Failed to destroy network for sandbox \"e0acc7bf3a969ef7f0a12b9fc3297793888ea6a298ece23f21f7165c5cfdfab6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.353205 containerd[1597]: time="2026-01-23T05:39:46.353118529Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-56x2c,Uid:f2986fef-16f6-4f5c-ada1-3406bc086cb8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e0acc7bf3a969ef7f0a12b9fc3297793888ea6a298ece23f21f7165c5cfdfab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.353539 kubelet[2784]: E0123 05:39:46.353482 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0acc7bf3a969ef7f0a12b9fc3297793888ea6a298ece23f21f7165c5cfdfab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.353808 kubelet[2784]: E0123 05:39:46.353714 2784 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0acc7bf3a969ef7f0a12b9fc3297793888ea6a298ece23f21f7165c5cfdfab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-56x2c" Jan 23 05:39:46.353808 kubelet[2784]: E0123 05:39:46.353757 2784 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0acc7bf3a969ef7f0a12b9fc3297793888ea6a298ece23f21f7165c5cfdfab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-56x2c" Jan 23 05:39:46.354152 kubelet[2784]: E0123 05:39:46.354110 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-56x2c_calico-system(f2986fef-16f6-4f5c-ada1-3406bc086cb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-666569f655-56x2c_calico-system(f2986fef-16f6-4f5c-ada1-3406bc086cb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0acc7bf3a969ef7f0a12b9fc3297793888ea6a298ece23f21f7165c5cfdfab6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-56x2c" podUID="f2986fef-16f6-4f5c-ada1-3406bc086cb8" Jan 23 05:39:46.360817 containerd[1597]: time="2026-01-23T05:39:46.360702163Z" level=error msg="Failed to destroy network for sandbox \"1c8712f2b8e5b1753810b7b8b6197b6bf05ee92698fa3304e42b26688065f92a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.366481 containerd[1597]: time="2026-01-23T05:39:46.366415780Z" level=error msg="Failed to destroy network for sandbox \"f57d29e425aa4139d4461cc2a23fc314efc4dd717975f42fe1d1163eaf756c76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.376984 containerd[1597]: time="2026-01-23T05:39:46.376822761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f956777f-dhm59,Uid:9393f034-f86c-4875-9435-8f85b0225d78,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f57d29e425aa4139d4461cc2a23fc314efc4dd717975f42fe1d1163eaf756c76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.377279 kubelet[2784]: E0123 05:39:46.377225 2784 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f57d29e425aa4139d4461cc2a23fc314efc4dd717975f42fe1d1163eaf756c76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.377344 kubelet[2784]: E0123 05:39:46.377308 2784 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f57d29e425aa4139d4461cc2a23fc314efc4dd717975f42fe1d1163eaf756c76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" Jan 23 05:39:46.377405 kubelet[2784]: E0123 05:39:46.377341 2784 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f57d29e425aa4139d4461cc2a23fc314efc4dd717975f42fe1d1163eaf756c76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" Jan 23 05:39:46.377457 kubelet[2784]: E0123 05:39:46.377419 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68f956777f-dhm59_calico-apiserver(9393f034-f86c-4875-9435-8f85b0225d78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68f956777f-dhm59_calico-apiserver(9393f034-f86c-4875-9435-8f85b0225d78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f57d29e425aa4139d4461cc2a23fc314efc4dd717975f42fe1d1163eaf756c76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" podUID="9393f034-f86c-4875-9435-8f85b0225d78" Jan 23 05:39:46.379003 containerd[1597]: time="2026-01-23T05:39:46.378783552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5755b9bbc-mpw6q,Uid:d0e779d4-f949-4480-8be6-1410e9d1d223,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c8712f2b8e5b1753810b7b8b6197b6bf05ee92698fa3304e42b26688065f92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.379531 kubelet[2784]: E0123 05:39:46.379419 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c8712f2b8e5b1753810b7b8b6197b6bf05ee92698fa3304e42b26688065f92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.379531 kubelet[2784]: E0123 05:39:46.379465 2784 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c8712f2b8e5b1753810b7b8b6197b6bf05ee92698fa3304e42b26688065f92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5755b9bbc-mpw6q" Jan 23 05:39:46.379531 kubelet[2784]: E0123 05:39:46.379494 2784 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c8712f2b8e5b1753810b7b8b6197b6bf05ee92698fa3304e42b26688065f92a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5755b9bbc-mpw6q" Jan 23 05:39:46.379715 kubelet[2784]: E0123 05:39:46.379544 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5755b9bbc-mpw6q_calico-system(d0e779d4-f949-4480-8be6-1410e9d1d223)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5755b9bbc-mpw6q_calico-system(d0e779d4-f949-4480-8be6-1410e9d1d223)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c8712f2b8e5b1753810b7b8b6197b6bf05ee92698fa3304e42b26688065f92a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5755b9bbc-mpw6q" podUID="d0e779d4-f949-4480-8be6-1410e9d1d223" Jan 23 05:39:46.387482 containerd[1597]: time="2026-01-23T05:39:46.387429454Z" level=error msg="Failed to destroy network for sandbox \"12db518bc9657e5e27504736529b2d0aae5d064f73b33802fce6041f1bb0a10e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.390168 containerd[1597]: time="2026-01-23T05:39:46.390123363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-47b87,Uid:055be855-4fc5-42d5-be74-896584a97ac5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12db518bc9657e5e27504736529b2d0aae5d064f73b33802fce6041f1bb0a10e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.390485 kubelet[2784]: E0123 
05:39:46.390442 2784 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12db518bc9657e5e27504736529b2d0aae5d064f73b33802fce6041f1bb0a10e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 05:39:46.390546 kubelet[2784]: E0123 05:39:46.390506 2784 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12db518bc9657e5e27504736529b2d0aae5d064f73b33802fce6041f1bb0a10e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-47b87" Jan 23 05:39:46.390546 kubelet[2784]: E0123 05:39:46.390525 2784 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12db518bc9657e5e27504736529b2d0aae5d064f73b33802fce6041f1bb0a10e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-47b87" Jan 23 05:39:46.390686 kubelet[2784]: E0123 05:39:46.390571 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-47b87_kube-system(055be855-4fc5-42d5-be74-896584a97ac5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-47b87_kube-system(055be855-4fc5-42d5-be74-896584a97ac5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12db518bc9657e5e27504736529b2d0aae5d064f73b33802fce6041f1bb0a10e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-47b87" podUID="055be855-4fc5-42d5-be74-896584a97ac5" Jan 23 05:39:51.039139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1509873318.mount: Deactivated successfully. Jan 23 05:39:51.248857 containerd[1597]: time="2026-01-23T05:39:51.248543370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:51.250926 containerd[1597]: time="2026-01-23T05:39:51.249650213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 23 05:39:51.252172 containerd[1597]: time="2026-01-23T05:39:51.252023767Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:51.254293 containerd[1597]: time="2026-01-23T05:39:51.254219596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 05:39:51.254957 containerd[1597]: time="2026-01-23T05:39:51.254888974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.96794791s" Jan 23 05:39:51.254957 containerd[1597]: time="2026-01-23T05:39:51.254931783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 05:39:51.278212 containerd[1597]: 
time="2026-01-23T05:39:51.278147062Z" level=info msg="CreateContainer within sandbox \"aa025bb395734f9ab464e8acfe2b6ac03ef378d787dc431be2fdf2ca98a5d1f6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 05:39:51.298645 containerd[1597]: time="2026-01-23T05:39:51.298304537Z" level=info msg="Container 73e0a875e9d6879399a91fd95ed9b89209612dc9d8f9f1e9751f0b12e148f2e5: CDI devices from CRI Config.CDIDevices: []" Jan 23 05:39:51.311599 containerd[1597]: time="2026-01-23T05:39:51.311530161Z" level=info msg="CreateContainer within sandbox \"aa025bb395734f9ab464e8acfe2b6ac03ef378d787dc431be2fdf2ca98a5d1f6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"73e0a875e9d6879399a91fd95ed9b89209612dc9d8f9f1e9751f0b12e148f2e5\"" Jan 23 05:39:51.312395 containerd[1597]: time="2026-01-23T05:39:51.312349361Z" level=info msg="StartContainer for \"73e0a875e9d6879399a91fd95ed9b89209612dc9d8f9f1e9751f0b12e148f2e5\"" Jan 23 05:39:51.314454 containerd[1597]: time="2026-01-23T05:39:51.314408936Z" level=info msg="connecting to shim 73e0a875e9d6879399a91fd95ed9b89209612dc9d8f9f1e9751f0b12e148f2e5" address="unix:///run/containerd/s/526fe53f6def618fad665ab3eb820a91fafb6bb9db886b86d41f4faa67ae2bad" protocol=ttrpc version=3 Jan 23 05:39:51.350297 systemd[1]: Started cri-containerd-73e0a875e9d6879399a91fd95ed9b89209612dc9d8f9f1e9751f0b12e148f2e5.scope - libcontainer container 73e0a875e9d6879399a91fd95ed9b89209612dc9d8f9f1e9751f0b12e148f2e5. 
Jan 23 05:39:51.424000 audit: BPF prog-id=172 op=LOAD Jan 23 05:39:51.427846 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 23 05:39:51.428032 kernel: audit: type=1334 audit(1769146791.424:567): prog-id=172 op=LOAD Jan 23 05:39:51.424000 audit[3906]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3352 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:51.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733653061383735653964363837393339396139316664393565643962 Jan 23 05:39:51.448452 kernel: audit: type=1300 audit(1769146791.424:567): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3352 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:51.448548 kernel: audit: type=1327 audit(1769146791.424:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733653061383735653964363837393339396139316664393565643962 Jan 23 05:39:51.424000 audit: BPF prog-id=173 op=LOAD Jan 23 05:39:51.451255 kernel: audit: type=1334 audit(1769146791.424:568): prog-id=173 op=LOAD Jan 23 05:39:51.424000 audit[3906]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3352 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:51.461625 kernel: audit: type=1300 audit(1769146791.424:568): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3352 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:51.461753 kernel: audit: type=1327 audit(1769146791.424:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733653061383735653964363837393339396139316664393565643962 Jan 23 05:39:51.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733653061383735653964363837393339396139316664393565643962 Jan 23 05:39:51.473150 kernel: audit: type=1334 audit(1769146791.424:569): prog-id=173 op=UNLOAD Jan 23 05:39:51.424000 audit: BPF prog-id=173 op=UNLOAD Jan 23 05:39:51.474164 kernel: audit: type=1300 audit(1769146791.424:569): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3352 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:51.424000 audit[3906]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3352 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:51.488026 containerd[1597]: time="2026-01-23T05:39:51.487955567Z" level=info msg="StartContainer for 
\"73e0a875e9d6879399a91fd95ed9b89209612dc9d8f9f1e9751f0b12e148f2e5\" returns successfully" Jan 23 05:39:51.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733653061383735653964363837393339396139316664393565643962 Jan 23 05:39:51.500879 kernel: audit: type=1327 audit(1769146791.424:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733653061383735653964363837393339396139316664393565643962 Jan 23 05:39:51.500966 kernel: audit: type=1334 audit(1769146791.424:570): prog-id=172 op=UNLOAD Jan 23 05:39:51.424000 audit: BPF prog-id=172 op=UNLOAD Jan 23 05:39:51.424000 audit[3906]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3352 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:51.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733653061383735653964363837393339396139316664393565643962 Jan 23 05:39:51.424000 audit: BPF prog-id=174 op=LOAD Jan 23 05:39:51.424000 audit[3906]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3352 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:51.424000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733653061383735653964363837393339396139316664393565643962 Jan 23 05:39:51.617228 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 05:39:51.617413 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 23 05:39:51.887198 kubelet[2784]: I0123 05:39:51.887003 2784 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0e779d4-f949-4480-8be6-1410e9d1d223-whisker-backend-key-pair\") pod \"d0e779d4-f949-4480-8be6-1410e9d1d223\" (UID: \"d0e779d4-f949-4480-8be6-1410e9d1d223\") " Jan 23 05:39:51.889074 kubelet[2784]: I0123 05:39:51.888986 2784 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drf7g\" (UniqueName: \"kubernetes.io/projected/d0e779d4-f949-4480-8be6-1410e9d1d223-kube-api-access-drf7g\") pod \"d0e779d4-f949-4480-8be6-1410e9d1d223\" (UID: \"d0e779d4-f949-4480-8be6-1410e9d1d223\") " Jan 23 05:39:51.890006 kubelet[2784]: I0123 05:39:51.889171 2784 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0e779d4-f949-4480-8be6-1410e9d1d223-whisker-ca-bundle\") pod \"d0e779d4-f949-4480-8be6-1410e9d1d223\" (UID: \"d0e779d4-f949-4480-8be6-1410e9d1d223\") " Jan 23 05:39:51.890759 kubelet[2784]: I0123 05:39:51.890626 2784 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e779d4-f949-4480-8be6-1410e9d1d223-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d0e779d4-f949-4480-8be6-1410e9d1d223" (UID: "d0e779d4-f949-4480-8be6-1410e9d1d223"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 05:39:51.894006 kubelet[2784]: I0123 05:39:51.893965 2784 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e779d4-f949-4480-8be6-1410e9d1d223-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d0e779d4-f949-4480-8be6-1410e9d1d223" (UID: "d0e779d4-f949-4480-8be6-1410e9d1d223"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 05:39:51.895114 kubelet[2784]: I0123 05:39:51.894894 2784 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e779d4-f949-4480-8be6-1410e9d1d223-kube-api-access-drf7g" (OuterVolumeSpecName: "kube-api-access-drf7g") pod "d0e779d4-f949-4480-8be6-1410e9d1d223" (UID: "d0e779d4-f949-4480-8be6-1410e9d1d223"). InnerVolumeSpecName "kube-api-access-drf7g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 05:39:51.992923 kubelet[2784]: I0123 05:39:51.992775 2784 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-drf7g\" (UniqueName: \"kubernetes.io/projected/d0e779d4-f949-4480-8be6-1410e9d1d223-kube-api-access-drf7g\") on node \"localhost\" DevicePath \"\"" Jan 23 05:39:51.992923 kubelet[2784]: I0123 05:39:51.992837 2784 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0e779d4-f949-4480-8be6-1410e9d1d223-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 23 05:39:51.992923 kubelet[2784]: I0123 05:39:51.992852 2784 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0e779d4-f949-4480-8be6-1410e9d1d223-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 23 05:39:52.019953 systemd[1]: Removed slice kubepods-besteffort-podd0e779d4_f949_4480_8be6_1410e9d1d223.slice - libcontainer container 
kubepods-besteffort-podd0e779d4_f949_4480_8be6_1410e9d1d223.slice. Jan 23 05:39:52.030854 systemd[1]: var-lib-kubelet-pods-d0e779d4\x2df949\x2d4480\x2d8be6\x2d1410e9d1d223-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddrf7g.mount: Deactivated successfully. Jan 23 05:39:52.030969 systemd[1]: var-lib-kubelet-pods-d0e779d4\x2df949\x2d4480\x2d8be6\x2d1410e9d1d223-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 05:39:52.317120 kubelet[2784]: E0123 05:39:52.316715 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:52.337380 kubelet[2784]: I0123 05:39:52.337281 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nc4tf" podStartSLOduration=2.018703508 podStartE2EDuration="14.337264166s" podCreationTimestamp="2026-01-23 05:39:38 +0000 UTC" firstStartedPulling="2026-01-23 05:39:38.940037853 +0000 UTC m=+19.079631064" lastFinishedPulling="2026-01-23 05:39:51.25859851 +0000 UTC m=+31.398191722" observedRunningTime="2026-01-23 05:39:52.334983887 +0000 UTC m=+32.474577099" watchObservedRunningTime="2026-01-23 05:39:52.337264166 +0000 UTC m=+32.476857377" Jan 23 05:39:52.408802 systemd[1]: Created slice kubepods-besteffort-pod43eecc48_4e9e_429e_8243_803259cf177c.slice - libcontainer container kubepods-besteffort-pod43eecc48_4e9e_429e_8243_803259cf177c.slice. 
Jan 23 05:39:52.499386 kubelet[2784]: I0123 05:39:52.499232 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/43eecc48-4e9e-429e-8243-803259cf177c-whisker-backend-key-pair\") pod \"whisker-749c8bf99-5hnhn\" (UID: \"43eecc48-4e9e-429e-8243-803259cf177c\") " pod="calico-system/whisker-749c8bf99-5hnhn" Jan 23 05:39:52.499386 kubelet[2784]: I0123 05:39:52.499334 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2srbz\" (UniqueName: \"kubernetes.io/projected/43eecc48-4e9e-429e-8243-803259cf177c-kube-api-access-2srbz\") pod \"whisker-749c8bf99-5hnhn\" (UID: \"43eecc48-4e9e-429e-8243-803259cf177c\") " pod="calico-system/whisker-749c8bf99-5hnhn" Jan 23 05:39:52.499386 kubelet[2784]: I0123 05:39:52.499383 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43eecc48-4e9e-429e-8243-803259cf177c-whisker-ca-bundle\") pod \"whisker-749c8bf99-5hnhn\" (UID: \"43eecc48-4e9e-429e-8243-803259cf177c\") " pod="calico-system/whisker-749c8bf99-5hnhn" Jan 23 05:39:52.722860 containerd[1597]: time="2026-01-23T05:39:52.722484162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-749c8bf99-5hnhn,Uid:43eecc48-4e9e-429e-8243-803259cf177c,Namespace:calico-system,Attempt:0,}" Jan 23 05:39:52.992188 systemd-networkd[1506]: cali46202bc00b0: Link UP Jan 23 05:39:52.993168 systemd-networkd[1506]: cali46202bc00b0: Gained carrier Jan 23 05:39:53.013499 containerd[1597]: 2026-01-23 05:39:52.832 [INFO][3975] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 05:39:53.013499 containerd[1597]: 2026-01-23 05:39:52.852 [INFO][3975] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--749c8bf99--5hnhn-eth0 
whisker-749c8bf99- calico-system 43eecc48-4e9e-429e-8243-803259cf177c 963 0 2026-01-23 05:39:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:749c8bf99 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-749c8bf99-5hnhn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali46202bc00b0 [] [] }} ContainerID="3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" Namespace="calico-system" Pod="whisker-749c8bf99-5hnhn" WorkloadEndpoint="localhost-k8s-whisker--749c8bf99--5hnhn-" Jan 23 05:39:53.013499 containerd[1597]: 2026-01-23 05:39:52.852 [INFO][3975] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" Namespace="calico-system" Pod="whisker-749c8bf99-5hnhn" WorkloadEndpoint="localhost-k8s-whisker--749c8bf99--5hnhn-eth0" Jan 23 05:39:53.013499 containerd[1597]: 2026-01-23 05:39:52.932 [INFO][3989] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" HandleID="k8s-pod-network.3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" Workload="localhost-k8s-whisker--749c8bf99--5hnhn-eth0" Jan 23 05:39:53.013767 containerd[1597]: 2026-01-23 05:39:52.933 [INFO][3989] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" HandleID="k8s-pod-network.3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" Workload="localhost-k8s-whisker--749c8bf99--5hnhn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005261d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-749c8bf99-5hnhn", "timestamp":"2026-01-23 05:39:52.932893057 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 05:39:53.013767 containerd[1597]: 2026-01-23 05:39:52.933 [INFO][3989] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 05:39:53.013767 containerd[1597]: 2026-01-23 05:39:52.934 [INFO][3989] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 05:39:53.013767 containerd[1597]: 2026-01-23 05:39:52.934 [INFO][3989] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 05:39:53.013767 containerd[1597]: 2026-01-23 05:39:52.943 [INFO][3989] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" host="localhost" Jan 23 05:39:53.013767 containerd[1597]: 2026-01-23 05:39:52.951 [INFO][3989] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 05:39:53.013767 containerd[1597]: 2026-01-23 05:39:52.957 [INFO][3989] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 05:39:53.013767 containerd[1597]: 2026-01-23 05:39:52.959 [INFO][3989] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 05:39:53.013767 containerd[1597]: 2026-01-23 05:39:52.961 [INFO][3989] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 05:39:53.013767 containerd[1597]: 2026-01-23 05:39:52.961 [INFO][3989] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" host="localhost" Jan 23 05:39:53.013994 containerd[1597]: 2026-01-23 05:39:52.963 [INFO][3989] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e Jan 23 05:39:53.013994 
containerd[1597]: 2026-01-23 05:39:52.967 [INFO][3989] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" host="localhost" Jan 23 05:39:53.013994 containerd[1597]: 2026-01-23 05:39:52.978 [INFO][3989] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" host="localhost" Jan 23 05:39:53.013994 containerd[1597]: 2026-01-23 05:39:52.979 [INFO][3989] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" host="localhost" Jan 23 05:39:53.013994 containerd[1597]: 2026-01-23 05:39:52.979 [INFO][3989] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 05:39:53.013994 containerd[1597]: 2026-01-23 05:39:52.979 [INFO][3989] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" HandleID="k8s-pod-network.3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" Workload="localhost-k8s-whisker--749c8bf99--5hnhn-eth0" Jan 23 05:39:53.014268 containerd[1597]: 2026-01-23 05:39:52.981 [INFO][3975] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" Namespace="calico-system" Pod="whisker-749c8bf99-5hnhn" WorkloadEndpoint="localhost-k8s-whisker--749c8bf99--5hnhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--749c8bf99--5hnhn-eth0", GenerateName:"whisker-749c8bf99-", Namespace:"calico-system", SelfLink:"", UID:"43eecc48-4e9e-429e-8243-803259cf177c", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, 
time.January, 23, 5, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"749c8bf99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-749c8bf99-5hnhn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali46202bc00b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:39:53.014268 containerd[1597]: 2026-01-23 05:39:52.982 [INFO][3975] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" Namespace="calico-system" Pod="whisker-749c8bf99-5hnhn" WorkloadEndpoint="localhost-k8s-whisker--749c8bf99--5hnhn-eth0" Jan 23 05:39:53.014411 containerd[1597]: 2026-01-23 05:39:52.982 [INFO][3975] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46202bc00b0 ContainerID="3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" Namespace="calico-system" Pod="whisker-749c8bf99-5hnhn" WorkloadEndpoint="localhost-k8s-whisker--749c8bf99--5hnhn-eth0" Jan 23 05:39:53.014411 containerd[1597]: 2026-01-23 05:39:52.992 [INFO][3975] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" Namespace="calico-system" Pod="whisker-749c8bf99-5hnhn" 
WorkloadEndpoint="localhost-k8s-whisker--749c8bf99--5hnhn-eth0" Jan 23 05:39:53.014450 containerd[1597]: 2026-01-23 05:39:52.993 [INFO][3975] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" Namespace="calico-system" Pod="whisker-749c8bf99-5hnhn" WorkloadEndpoint="localhost-k8s-whisker--749c8bf99--5hnhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--749c8bf99--5hnhn-eth0", GenerateName:"whisker-749c8bf99-", Namespace:"calico-system", SelfLink:"", UID:"43eecc48-4e9e-429e-8243-803259cf177c", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"749c8bf99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e", Pod:"whisker-749c8bf99-5hnhn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali46202bc00b0", MAC:"3e:78:a5:61:6d:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:39:53.014515 containerd[1597]: 2026-01-23 05:39:53.010 [INFO][3975] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" Namespace="calico-system" Pod="whisker-749c8bf99-5hnhn" WorkloadEndpoint="localhost-k8s-whisker--749c8bf99--5hnhn-eth0" Jan 23 05:39:53.111818 containerd[1597]: time="2026-01-23T05:39:53.111157368Z" level=info msg="connecting to shim 3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e" address="unix:///run/containerd/s/308df96602270959025ce38aaf902e8786392b50f43d526c39d7ffde8945fb26" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:39:53.173984 systemd[1]: Started cri-containerd-3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e.scope - libcontainer container 3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e. Jan 23 05:39:53.223000 audit: BPF prog-id=175 op=LOAD Jan 23 05:39:53.224000 audit: BPF prog-id=176 op=LOAD Jan 23 05:39:53.224000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=4093 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365373334626130373332633236613834356130383433643062376262 Jan 23 05:39:53.224000 audit: BPF prog-id=176 op=UNLOAD Jan 23 05:39:53.224000 audit[4115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.224000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365373334626130373332633236613834356130383433643062376262 Jan 23 05:39:53.225000 audit: BPF prog-id=177 op=LOAD Jan 23 05:39:53.225000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=4093 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365373334626130373332633236613834356130383433643062376262 Jan 23 05:39:53.225000 audit: BPF prog-id=178 op=LOAD Jan 23 05:39:53.225000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=4093 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365373334626130373332633236613834356130383433643062376262 Jan 23 05:39:53.225000 audit: BPF prog-id=178 op=UNLOAD Jan 23 05:39:53.225000 audit[4115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 05:39:53.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365373334626130373332633236613834356130383433643062376262 Jan 23 05:39:53.225000 audit: BPF prog-id=177 op=UNLOAD Jan 23 05:39:53.225000 audit[4115]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4093 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365373334626130373332633236613834356130383433643062376262 Jan 23 05:39:53.225000 audit: BPF prog-id=179 op=LOAD Jan 23 05:39:53.225000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=4093 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365373334626130373332633236613834356130383433643062376262 Jan 23 05:39:53.228286 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 05:39:53.319993 containerd[1597]: time="2026-01-23T05:39:53.319582952Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-749c8bf99-5hnhn,Uid:43eecc48-4e9e-429e-8243-803259cf177c,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e734ba0732c26a845a0843d0b7bbf069a1c8a4e802b76c11fd92dc148a0d49e\"" Jan 23 05:39:53.320186 kubelet[2784]: I0123 05:39:53.319814 2784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 05:39:53.322739 kubelet[2784]: E0123 05:39:53.322717 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:53.326011 containerd[1597]: time="2026-01-23T05:39:53.325733537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 05:39:53.381414 containerd[1597]: time="2026-01-23T05:39:53.381299830Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:39:53.382875 containerd[1597]: time="2026-01-23T05:39:53.382825187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 05:39:53.383042 containerd[1597]: time="2026-01-23T05:39:53.382904906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 05:39:53.383464 kubelet[2784]: E0123 05:39:53.383380 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 05:39:53.383523 kubelet[2784]: E0123 05:39:53.383477 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 05:39:53.387818 kubelet[2784]: E0123 05:39:53.387718 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2155c05c8ee142bb8990bf0ae2991b80,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2srbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-749c8bf99-5hnhn_calico-system(43eecc48-4e9e-429e-8243-803259cf177c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 05:39:53.392375 containerd[1597]: time="2026-01-23T05:39:53.392144333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 05:39:53.402000 audit: BPF prog-id=180 op=LOAD Jan 23 05:39:53.402000 audit[4179]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc08b4c140 a2=98 a3=1fffffffffffffff items=0 ppid=4015 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.402000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 05:39:53.402000 audit: BPF prog-id=180 op=UNLOAD Jan 23 05:39:53.402000 audit[4179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc08b4c110 a3=0 items=0 ppid=4015 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.402000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 05:39:53.402000 audit: BPF prog-id=181 op=LOAD Jan 23 05:39:53.402000 audit[4179]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc08b4c020 a2=94 a3=3 items=0 ppid=4015 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 05:39:53.402000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 05:39:53.402000 audit: BPF prog-id=181 op=UNLOAD Jan 23 05:39:53.402000 audit[4179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc08b4c020 a2=94 a3=3 items=0 ppid=4015 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.402000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 05:39:53.402000 audit: BPF prog-id=182 op=LOAD Jan 23 05:39:53.402000 audit[4179]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc08b4c060 a2=94 a3=7ffc08b4c240 items=0 ppid=4015 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.402000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 05:39:53.403000 audit: BPF prog-id=182 op=UNLOAD Jan 23 05:39:53.403000 audit[4179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc08b4c060 a2=94 a3=7ffc08b4c240 items=0 ppid=4015 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.403000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 05:39:53.404000 audit: BPF prog-id=183 op=LOAD Jan 23 05:39:53.404000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc261497e0 a2=98 a3=3 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.404000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.404000 audit: BPF prog-id=183 op=UNLOAD Jan 23 05:39:53.404000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc261497b0 a3=0 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.404000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.404000 audit: BPF prog-id=184 op=LOAD Jan 23 05:39:53.404000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc261495d0 a2=94 a3=54428f items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.404000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.404000 audit: BPF prog-id=184 op=UNLOAD Jan 23 05:39:53.404000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 
a1=7ffc261495d0 a2=94 a3=54428f items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.404000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.404000 audit: BPF prog-id=185 op=LOAD Jan 23 05:39:53.404000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc26149600 a2=94 a3=2 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.404000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.404000 audit: BPF prog-id=185 op=UNLOAD Jan 23 05:39:53.404000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc26149600 a2=0 a3=2 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.404000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.450494 containerd[1597]: time="2026-01-23T05:39:53.450432799Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:39:53.451836 containerd[1597]: time="2026-01-23T05:39:53.451786097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 05:39:53.451920 containerd[1597]: time="2026-01-23T05:39:53.451879180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active 
requests=0, bytes read=0" Jan 23 05:39:53.452121 kubelet[2784]: E0123 05:39:53.452028 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 05:39:53.452198 kubelet[2784]: E0123 05:39:53.452137 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 05:39:53.452369 kubelet[2784]: E0123 05:39:53.452267 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2srbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recu
rsiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-749c8bf99-5hnhn_calico-system(43eecc48-4e9e-429e-8243-803259cf177c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 05:39:53.453584 kubelet[2784]: E0123 05:39:53.453525 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749c8bf99-5hnhn" podUID="43eecc48-4e9e-429e-8243-803259cf177c" Jan 23 05:39:53.572000 audit: BPF prog-id=186 op=LOAD Jan 23 05:39:53.572000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc261494c0 a2=94 a3=1 items=0 ppid=4015 pid=4180 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.572000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.572000 audit: BPF prog-id=186 op=UNLOAD Jan 23 05:39:53.572000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc261494c0 a2=94 a3=1 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.572000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.585000 audit: BPF prog-id=187 op=LOAD Jan 23 05:39:53.585000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc261494b0 a2=94 a3=4 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.585000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.585000 audit: BPF prog-id=187 op=UNLOAD Jan 23 05:39:53.585000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc261494b0 a2=0 a3=4 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.585000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.585000 audit: BPF prog-id=188 op=LOAD Jan 23 05:39:53.585000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc26149310 a2=94 a3=5 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.585000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.585000 audit: BPF prog-id=188 op=UNLOAD Jan 23 05:39:53.585000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc26149310 a2=0 a3=5 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.585000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.585000 audit: BPF prog-id=189 op=LOAD Jan 23 05:39:53.585000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc26149530 a2=94 a3=6 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.585000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.586000 audit: BPF prog-id=189 op=UNLOAD Jan 23 05:39:53.586000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc26149530 a2=0 a3=6 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.586000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.586000 audit: BPF prog-id=190 op=LOAD Jan 23 05:39:53.586000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc26148ce0 a2=94 a3=88 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.586000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.586000 audit: BPF prog-id=191 op=LOAD Jan 23 05:39:53.586000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc26148b60 a2=94 a3=2 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.586000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.586000 audit: BPF prog-id=191 op=UNLOAD Jan 23 05:39:53.586000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc26148b90 a2=0 a3=7ffc26148c90 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.586000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.587000 audit: BPF prog-id=190 op=UNLOAD Jan 23 05:39:53.587000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=32e2ed10 a2=0 a3=29d233b9dd927a27 items=0 ppid=4015 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.587000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 05:39:53.597000 audit: BPF prog-id=192 op=LOAD Jan 23 05:39:53.597000 audit[4183]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff81f51ff0 a2=98 a3=1999999999999999 items=0 ppid=4015 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 05:39:53.597000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 05:39:53.597000 audit: BPF prog-id=192 op=UNLOAD Jan 23 05:39:53.597000 audit[4183]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff81f51fc0 a3=0 items=0 ppid=4015 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.597000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 05:39:53.597000 audit: BPF prog-id=193 op=LOAD Jan 23 05:39:53.597000 audit[4183]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff81f51ed0 a2=94 a3=ffff items=0 ppid=4015 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.597000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 05:39:53.598000 audit: BPF prog-id=193 op=UNLOAD Jan 23 05:39:53.598000 audit[4183]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff81f51ed0 a2=94 a3=ffff items=0 ppid=4015 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.598000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 05:39:53.598000 audit: BPF prog-id=194 op=LOAD Jan 23 05:39:53.598000 audit[4183]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff81f51f10 a2=94 a3=7fff81f520f0 items=0 ppid=4015 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.598000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 05:39:53.598000 audit: BPF prog-id=194 op=UNLOAD Jan 23 05:39:53.598000 audit[4183]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff81f51f10 a2=94 a3=7fff81f520f0 items=0 ppid=4015 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.598000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 05:39:53.687720 systemd-networkd[1506]: vxlan.calico: Link UP Jan 23 05:39:53.687732 systemd-networkd[1506]: vxlan.calico: Gained carrier Jan 23 05:39:53.705000 audit: BPF 
prog-id=195 op=LOAD Jan 23 05:39:53.705000 audit[4208]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe29f829c0 a2=98 a3=0 items=0 ppid=4015 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 05:39:53.705000 audit: BPF prog-id=195 op=UNLOAD Jan 23 05:39:53.705000 audit[4208]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe29f82990 a3=0 items=0 ppid=4015 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 05:39:53.705000 audit: BPF prog-id=196 op=LOAD Jan 23 05:39:53.705000 audit[4208]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe29f827d0 a2=94 a3=54428f items=0 ppid=4015 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 05:39:53.705000 audit: BPF prog-id=196 op=UNLOAD Jan 23 05:39:53.705000 audit[4208]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe29f827d0 a2=94 a3=54428f items=0 ppid=4015 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 05:39:53.705000 audit: BPF prog-id=197 op=LOAD Jan 23 05:39:53.705000 audit[4208]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe29f82800 a2=94 a3=2 items=0 ppid=4015 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 05:39:53.705000 audit: BPF prog-id=197 op=UNLOAD Jan 23 05:39:53.705000 audit[4208]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe29f82800 a2=0 a3=2 items=0 ppid=4015 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 05:39:53.705000 audit: BPF prog-id=198 op=LOAD Jan 23 05:39:53.705000 audit[4208]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe29f825b0 
a2=94 a3=4 items=0 ppid=4015 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 05:39:53.705000 audit: BPF prog-id=198 op=UNLOAD Jan 23 05:39:53.705000 audit[4208]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe29f825b0 a2=94 a3=4 items=0 ppid=4015 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 05:39:53.705000 audit: BPF prog-id=199 op=LOAD Jan 23 05:39:53.705000 audit[4208]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe29f826b0 a2=94 a3=7ffe29f82830 items=0 ppid=4015 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 05:39:53.705000 audit: BPF prog-id=199 op=UNLOAD Jan 23 05:39:53.705000 audit[4208]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe29f826b0 a2=0 a3=7ffe29f82830 items=0 ppid=4015 pid=4208 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 05:39:53.706000 audit: BPF prog-id=200 op=LOAD Jan 23 05:39:53.706000 audit[4208]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe29f81de0 a2=94 a3=2 items=0 ppid=4015 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.706000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 05:39:53.706000 audit: BPF prog-id=200 op=UNLOAD Jan 23 05:39:53.706000 audit[4208]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe29f81de0 a2=0 a3=2 items=0 ppid=4015 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.706000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 05:39:53.706000 audit: BPF prog-id=201 op=LOAD Jan 23 05:39:53.706000 audit[4208]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe29f81ee0 a2=94 a3=30 items=0 ppid=4015 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.706000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 05:39:53.715000 audit: BPF prog-id=202 op=LOAD Jan 23 05:39:53.715000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff0bb66060 a2=98 a3=0 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.715000 audit: BPF prog-id=202 op=UNLOAD Jan 23 05:39:53.715000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff0bb66030 a3=0 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.715000 audit: BPF prog-id=203 op=LOAD Jan 23 05:39:53.715000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0bb65e50 a2=94 a3=54428f items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
05:39:53.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.716000 audit: BPF prog-id=203 op=UNLOAD Jan 23 05:39:53.716000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0bb65e50 a2=94 a3=54428f items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.716000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.716000 audit: BPF prog-id=204 op=LOAD Jan 23 05:39:53.716000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0bb65e80 a2=94 a3=2 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.716000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.716000 audit: BPF prog-id=204 op=UNLOAD Jan 23 05:39:53.716000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0bb65e80 a2=0 a3=2 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.716000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.895000 audit: BPF prog-id=205 op=LOAD Jan 23 05:39:53.895000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0bb65d40 a2=94 a3=1 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.895000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.896000 audit: BPF prog-id=205 op=UNLOAD Jan 23 05:39:53.896000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0bb65d40 a2=94 a3=1 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.896000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.905000 audit: BPF prog-id=206 op=LOAD Jan 23 05:39:53.905000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff0bb65d30 a2=94 a3=4 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.905000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.905000 audit: BPF prog-id=206 op=UNLOAD Jan 23 05:39:53.905000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff0bb65d30 a2=0 a3=4 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.905000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.906000 audit: BPF prog-id=207 op=LOAD Jan 23 05:39:53.906000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff0bb65b90 a2=94 a3=5 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.906000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.906000 audit: BPF prog-id=207 op=UNLOAD Jan 23 05:39:53.906000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff0bb65b90 a2=0 a3=5 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.906000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.906000 audit: BPF prog-id=208 op=LOAD Jan 23 05:39:53.906000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff0bb65db0 a2=94 a3=6 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.906000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.906000 audit: BPF prog-id=208 op=UNLOAD Jan 23 05:39:53.906000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff0bb65db0 a2=0 a3=6 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.906000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.906000 audit: BPF prog-id=209 op=LOAD Jan 23 05:39:53.906000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff0bb65560 a2=94 a3=88 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.906000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.906000 audit: BPF prog-id=210 op=LOAD Jan 23 05:39:53.906000 audit[4215]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff0bb653e0 a2=94 a3=2 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.906000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.907000 audit: BPF prog-id=210 op=UNLOAD Jan 23 05:39:53.907000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff0bb65410 a2=0 a3=7fff0bb65510 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.907000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.907000 audit: BPF prog-id=209 op=UNLOAD Jan 23 05:39:53.907000 audit[4215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=23eddd10 a2=0 a3=17a46a7772117b70 items=0 ppid=4015 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.907000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 05:39:53.917000 audit: BPF prog-id=201 op=UNLOAD Jan 23 05:39:53.917000 audit[4015]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00007ca40 a2=0 a3=0 items=0 ppid=4007 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.917000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 23 05:39:53.978000 audit[4240]: NETFILTER_CFG table=nat:119 family=2 entries=15 op=nft_register_chain pid=4240 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 05:39:53.978000 audit[4240]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe52ad9ed0 a2=0 a3=7ffe52ad9ebc items=0 ppid=4015 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.978000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 05:39:53.978000 audit[4241]: NETFILTER_CFG table=mangle:120 family=2 entries=16 op=nft_register_chain pid=4241 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 05:39:53.978000 audit[4241]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffcfa610040 a2=0 a3=7ffcfa61002c items=0 ppid=4015 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.978000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 05:39:53.985000 audit[4239]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4239 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 05:39:53.985000 audit[4239]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcf8edb900 a2=0 a3=7ffcf8edb8ec items=0 ppid=4015 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.985000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 05:39:53.995629 kubelet[2784]: I0123 05:39:53.995565 2784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e779d4-f949-4480-8be6-1410e9d1d223" path="/var/lib/kubelet/pods/d0e779d4-f949-4480-8be6-1410e9d1d223/volumes" Jan 23 05:39:53.990000 audit[4243]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4243 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 05:39:53.990000 audit[4243]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffdcad2bcb0 a2=0 a3=7ffdcad2bc9c items=0 ppid=4015 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:53.990000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 05:39:54.328103 kubelet[2784]: E0123 05:39:54.328016 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749c8bf99-5hnhn" podUID="43eecc48-4e9e-429e-8243-803259cf177c" Jan 23 05:39:54.356000 audit[4254]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4254 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:54.356000 audit[4254]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdd85adb60 a2=0 a3=7ffdd85adb4c items=0 ppid=2961 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:54.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:54.362000 audit[4254]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4254 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:54.362000 audit[4254]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdd85adb60 a2=0 a3=0 items=0 ppid=2961 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:54.362000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:54.771541 systemd-networkd[1506]: cali46202bc00b0: Gained IPv6LL Jan 23 05:39:54.963320 systemd-networkd[1506]: vxlan.calico: Gained IPv6LL Jan 23 05:39:55.078128 kubelet[2784]: I0123 05:39:55.077951 2784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 05:39:55.078606 kubelet[2784]: E0123 05:39:55.078494 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:55.331206 kubelet[2784]: E0123 05:39:55.330872 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:55.332985 kubelet[2784]: E0123 05:39:55.332908 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749c8bf99-5hnhn" podUID="43eecc48-4e9e-429e-8243-803259cf177c" Jan 23 05:39:56.993396 containerd[1597]: time="2026-01-23T05:39:56.993311342Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6986698974-zhprj,Uid:3cdbf9fd-0ae3-408e-ba62-8b7474385dec,Namespace:calico-system,Attempt:0,}" Jan 23 05:39:56.993396 containerd[1597]: time="2026-01-23T05:39:56.993396660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f956777f-dhm59,Uid:9393f034-f86c-4875-9435-8f85b0225d78,Namespace:calico-apiserver,Attempt:0,}" Jan 23 05:39:57.160616 systemd-networkd[1506]: calia2a7b8af3e8: Link UP Jan 23 05:39:57.160942 systemd-networkd[1506]: calia2a7b8af3e8: Gained carrier Jan 23 05:39:57.198955 containerd[1597]: 2026-01-23 05:39:57.062 [INFO][4311] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6986698974--zhprj-eth0 calico-kube-controllers-6986698974- calico-system 3cdbf9fd-0ae3-408e-ba62-8b7474385dec 889 0 2026-01-23 05:39:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6986698974 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6986698974-zhprj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia2a7b8af3e8 [] [] }} ContainerID="a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" Namespace="calico-system" Pod="calico-kube-controllers-6986698974-zhprj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6986698974--zhprj-" Jan 23 05:39:57.198955 containerd[1597]: 2026-01-23 05:39:57.062 [INFO][4311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" Namespace="calico-system" Pod="calico-kube-controllers-6986698974-zhprj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6986698974--zhprj-eth0" Jan 23 
05:39:57.198955 containerd[1597]: 2026-01-23 05:39:57.109 [INFO][4345] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" HandleID="k8s-pod-network.a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" Workload="localhost-k8s-calico--kube--controllers--6986698974--zhprj-eth0" Jan 23 05:39:57.199206 containerd[1597]: 2026-01-23 05:39:57.109 [INFO][4345] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" HandleID="k8s-pod-network.a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" Workload="localhost-k8s-calico--kube--controllers--6986698974--zhprj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6986698974-zhprj", "timestamp":"2026-01-23 05:39:57.109630032 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 05:39:57.199206 containerd[1597]: 2026-01-23 05:39:57.109 [INFO][4345] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 05:39:57.199206 containerd[1597]: 2026-01-23 05:39:57.109 [INFO][4345] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 05:39:57.199206 containerd[1597]: 2026-01-23 05:39:57.109 [INFO][4345] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 05:39:57.199206 containerd[1597]: 2026-01-23 05:39:57.118 [INFO][4345] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" host="localhost" Jan 23 05:39:57.199206 containerd[1597]: 2026-01-23 05:39:57.126 [INFO][4345] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 05:39:57.199206 containerd[1597]: 2026-01-23 05:39:57.133 [INFO][4345] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 05:39:57.199206 containerd[1597]: 2026-01-23 05:39:57.135 [INFO][4345] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 05:39:57.199206 containerd[1597]: 2026-01-23 05:39:57.138 [INFO][4345] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 05:39:57.199206 containerd[1597]: 2026-01-23 05:39:57.138 [INFO][4345] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" host="localhost" Jan 23 05:39:57.199491 containerd[1597]: 2026-01-23 05:39:57.140 [INFO][4345] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060 Jan 23 05:39:57.199491 containerd[1597]: 2026-01-23 05:39:57.144 [INFO][4345] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" host="localhost" Jan 23 05:39:57.199491 containerd[1597]: 2026-01-23 05:39:57.151 [INFO][4345] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" host="localhost" Jan 23 05:39:57.199491 containerd[1597]: 2026-01-23 05:39:57.151 [INFO][4345] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" host="localhost" Jan 23 05:39:57.199491 containerd[1597]: 2026-01-23 05:39:57.152 [INFO][4345] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 05:39:57.199491 containerd[1597]: 2026-01-23 05:39:57.152 [INFO][4345] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" HandleID="k8s-pod-network.a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" Workload="localhost-k8s-calico--kube--controllers--6986698974--zhprj-eth0" Jan 23 05:39:57.199602 containerd[1597]: 2026-01-23 05:39:57.155 [INFO][4311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" Namespace="calico-system" Pod="calico-kube-controllers-6986698974-zhprj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6986698974--zhprj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6986698974--zhprj-eth0", GenerateName:"calico-kube-controllers-6986698974-", Namespace:"calico-system", SelfLink:"", UID:"3cdbf9fd-0ae3-408e-ba62-8b7474385dec", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6986698974", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6986698974-zhprj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2a7b8af3e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:39:57.199711 containerd[1597]: 2026-01-23 05:39:57.155 [INFO][4311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" Namespace="calico-system" Pod="calico-kube-controllers-6986698974-zhprj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6986698974--zhprj-eth0" Jan 23 05:39:57.199711 containerd[1597]: 2026-01-23 05:39:57.155 [INFO][4311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2a7b8af3e8 ContainerID="a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" Namespace="calico-system" Pod="calico-kube-controllers-6986698974-zhprj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6986698974--zhprj-eth0" Jan 23 05:39:57.199711 containerd[1597]: 2026-01-23 05:39:57.158 [INFO][4311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" Namespace="calico-system" Pod="calico-kube-controllers-6986698974-zhprj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6986698974--zhprj-eth0" Jan 23 05:39:57.199781 containerd[1597]: 
2026-01-23 05:39:57.159 [INFO][4311] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" Namespace="calico-system" Pod="calico-kube-controllers-6986698974-zhprj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6986698974--zhprj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6986698974--zhprj-eth0", GenerateName:"calico-kube-controllers-6986698974-", Namespace:"calico-system", SelfLink:"", UID:"3cdbf9fd-0ae3-408e-ba62-8b7474385dec", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6986698974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060", Pod:"calico-kube-controllers-6986698974-zhprj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2a7b8af3e8", MAC:"76:b9:5e:8a:2b:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:39:57.199849 containerd[1597]: 
2026-01-23 05:39:57.179 [INFO][4311] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" Namespace="calico-system" Pod="calico-kube-controllers-6986698974-zhprj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6986698974--zhprj-eth0" Jan 23 05:39:57.211000 audit[4368]: NETFILTER_CFG table=filter:125 family=2 entries=36 op=nft_register_chain pid=4368 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 05:39:57.215839 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 23 05:39:57.215946 kernel: audit: type=1325 audit(1769146797.211:648): table=filter:125 family=2 entries=36 op=nft_register_chain pid=4368 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 05:39:57.211000 audit[4368]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffda9f7c6b0 a2=0 a3=7ffda9f7c69c items=0 ppid=4015 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.232383 containerd[1597]: time="2026-01-23T05:39:57.232242216Z" level=info msg="connecting to shim a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060" address="unix:///run/containerd/s/f235a6affef5f9296490ad0b0787bda4f422fbd09cbce9db579a2d9ee5c23d7a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:39:57.236536 kernel: audit: type=1300 audit(1769146797.211:648): arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffda9f7c6b0 a2=0 a3=7ffda9f7c69c items=0 ppid=4015 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.236803 kernel: audit: type=1327 audit(1769146797.211:648): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 05:39:57.211000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 05:39:57.299033 systemd[1]: Started cri-containerd-a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060.scope - libcontainer container a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060. Jan 23 05:39:57.326599 systemd-networkd[1506]: cali1bb8fcccdcf: Link UP Jan 23 05:39:57.327398 systemd-networkd[1506]: cali1bb8fcccdcf: Gained carrier Jan 23 05:39:57.337000 audit: BPF prog-id=211 op=LOAD Jan 23 05:39:57.342202 kernel: audit: type=1334 audit(1769146797.337:649): prog-id=211 op=LOAD Jan 23 05:39:57.341000 audit: BPF prog-id=212 op=LOAD Jan 23 05:39:57.341000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.346558 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 05:39:57.353780 containerd[1597]: 2026-01-23 05:39:57.055 [INFO][4321] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68f956777f--dhm59-eth0 calico-apiserver-68f956777f- calico-apiserver 9393f034-f86c-4875-9435-8f85b0225d78 884 0 2026-01-23 05:39:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68f956777f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] 
[] [] []} {k8s localhost calico-apiserver-68f956777f-dhm59 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1bb8fcccdcf [] [] }} ContainerID="2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-dhm59" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--dhm59-" Jan 23 05:39:57.353780 containerd[1597]: 2026-01-23 05:39:57.056 [INFO][4321] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-dhm59" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--dhm59-eth0" Jan 23 05:39:57.353780 containerd[1597]: 2026-01-23 05:39:57.113 [INFO][4340] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" HandleID="k8s-pod-network.2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" Workload="localhost-k8s-calico--apiserver--68f956777f--dhm59-eth0" Jan 23 05:39:57.353990 containerd[1597]: 2026-01-23 05:39:57.113 [INFO][4340] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" HandleID="k8s-pod-network.2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" Workload="localhost-k8s-calico--apiserver--68f956777f--dhm59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000130da0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68f956777f-dhm59", "timestamp":"2026-01-23 05:39:57.113253111 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 05:39:57.353990 
containerd[1597]: 2026-01-23 05:39:57.113 [INFO][4340] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 05:39:57.353990 containerd[1597]: 2026-01-23 05:39:57.152 [INFO][4340] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 05:39:57.353990 containerd[1597]: 2026-01-23 05:39:57.152 [INFO][4340] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 05:39:57.353990 containerd[1597]: 2026-01-23 05:39:57.224 [INFO][4340] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" host="localhost" Jan 23 05:39:57.353990 containerd[1597]: 2026-01-23 05:39:57.248 [INFO][4340] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 05:39:57.353990 containerd[1597]: 2026-01-23 05:39:57.257 [INFO][4340] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 05:39:57.353990 containerd[1597]: 2026-01-23 05:39:57.260 [INFO][4340] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 05:39:57.353990 containerd[1597]: 2026-01-23 05:39:57.267 [INFO][4340] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 05:39:57.353990 containerd[1597]: 2026-01-23 05:39:57.267 [INFO][4340] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" host="localhost" Jan 23 05:39:57.354528 containerd[1597]: 2026-01-23 05:39:57.275 [INFO][4340] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df Jan 23 05:39:57.354528 containerd[1597]: 2026-01-23 05:39:57.298 [INFO][4340] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" host="localhost" Jan 23 05:39:57.354528 containerd[1597]: 2026-01-23 05:39:57.318 [INFO][4340] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" host="localhost" Jan 23 05:39:57.354528 containerd[1597]: 2026-01-23 05:39:57.318 [INFO][4340] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" host="localhost" Jan 23 05:39:57.354528 containerd[1597]: 2026-01-23 05:39:57.319 [INFO][4340] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 05:39:57.354528 containerd[1597]: 2026-01-23 05:39:57.319 [INFO][4340] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" HandleID="k8s-pod-network.2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" Workload="localhost-k8s-calico--apiserver--68f956777f--dhm59-eth0" Jan 23 05:39:57.354762 containerd[1597]: 2026-01-23 05:39:57.322 [INFO][4321] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-dhm59" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--dhm59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68f956777f--dhm59-eth0", GenerateName:"calico-apiserver-68f956777f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9393f034-f86c-4875-9435-8f85b0225d78", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 34, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f956777f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68f956777f-dhm59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1bb8fcccdcf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:39:57.354875 containerd[1597]: 2026-01-23 05:39:57.322 [INFO][4321] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-dhm59" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--dhm59-eth0" Jan 23 05:39:57.354875 containerd[1597]: 2026-01-23 05:39:57.322 [INFO][4321] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bb8fcccdcf ContainerID="2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-dhm59" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--dhm59-eth0" Jan 23 05:39:57.354875 containerd[1597]: 2026-01-23 05:39:57.327 [INFO][4321] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" 
Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-dhm59" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--dhm59-eth0" Jan 23 05:39:57.354993 containerd[1597]: 2026-01-23 05:39:57.328 [INFO][4321] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-dhm59" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--dhm59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68f956777f--dhm59-eth0", GenerateName:"calico-apiserver-68f956777f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9393f034-f86c-4875-9435-8f85b0225d78", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f956777f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df", Pod:"calico-apiserver-68f956777f-dhm59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1bb8fcccdcf", MAC:"72:69:2e:ee:62:71", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:39:57.355173 containerd[1597]: 2026-01-23 05:39:57.340 [INFO][4321] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-dhm59" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--dhm59-eth0" Jan 23 05:39:57.356145 kernel: audit: type=1334 audit(1769146797.341:650): prog-id=212 op=LOAD Jan 23 05:39:57.356214 kernel: audit: type=1300 audit(1769146797.341:650): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.369505 kernel: audit: type=1327 audit(1769146797.341:650): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130633161313264323832363137646136366136306235623633626365 Jan 23 05:39:57.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130633161313264323832363137646136366136306235623633626365 Jan 23 05:39:57.341000 audit: BPF prog-id=212 op=UNLOAD Jan 23 05:39:57.375003 kernel: audit: type=1334 audit(1769146797.341:651): prog-id=212 op=UNLOAD Jan 23 05:39:57.375225 kernel: audit: type=1300 audit(1769146797.341:651): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.341000 audit[4389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.395256 kernel: audit: type=1327 audit(1769146797.341:651): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130633161313264323832363137646136366136306235623633626365 Jan 23 05:39:57.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130633161313264323832363137646136366136306235623633626365 Jan 23 05:39:57.341000 audit: BPF prog-id=213 op=LOAD Jan 23 05:39:57.341000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130633161313264323832363137646136366136306235623633626365 Jan 23 05:39:57.341000 audit: BPF prog-id=214 op=LOAD Jan 23 05:39:57.341000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130633161313264323832363137646136366136306235623633626365 Jan 23 05:39:57.342000 audit: BPF prog-id=214 op=UNLOAD Jan 23 05:39:57.342000 audit[4389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130633161313264323832363137646136366136306235623633626365 Jan 23 05:39:57.342000 audit: BPF prog-id=213 op=UNLOAD Jan 23 05:39:57.342000 audit[4389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130633161313264323832363137646136366136306235623633626365 Jan 23 05:39:57.342000 audit: BPF prog-id=215 op=LOAD Jan 23 05:39:57.342000 audit[4389]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4378 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130633161313264323832363137646136366136306235623633626365 Jan 23 05:39:57.377000 audit[4420]: NETFILTER_CFG table=filter:126 family=2 entries=54 op=nft_register_chain pid=4420 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 05:39:57.377000 audit[4420]: SYSCALL arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7fffc285ff80 a2=0 a3=7fffc285ff6c items=0 ppid=4015 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.377000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 05:39:57.417486 containerd[1597]: time="2026-01-23T05:39:57.417410907Z" level=info msg="connecting to shim 2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df" address="unix:///run/containerd/s/a1ce537dea77cad34de6fca4456d81a45eeaf8c1bf360682eee508e689a67566" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:39:57.440749 containerd[1597]: time="2026-01-23T05:39:57.440589892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6986698974-zhprj,Uid:3cdbf9fd-0ae3-408e-ba62-8b7474385dec,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0c1a12d282617da66a60b5b63bce42298f59b3a75137655ebadbfd63ab77060\"" Jan 23 05:39:57.446220 containerd[1597]: time="2026-01-23T05:39:57.446146624Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 05:39:57.476390 systemd[1]: Started cri-containerd-2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df.scope - libcontainer container 2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df. Jan 23 05:39:57.496000 audit: BPF prog-id=216 op=LOAD Jan 23 05:39:57.497000 audit: BPF prog-id=217 op=LOAD Jan 23 05:39:57.497000 audit[4448]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4430 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262363664626532366433623432396339663930393863373762353364 Jan 23 05:39:57.497000 audit: BPF prog-id=217 op=UNLOAD Jan 23 05:39:57.497000 audit[4448]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4430 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262363664626532366433623432396339663930393863373762353364 Jan 23 05:39:57.497000 audit: BPF prog-id=218 op=LOAD Jan 23 05:39:57.497000 audit[4448]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4430 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262363664626532366433623432396339663930393863373762353364 Jan 23 05:39:57.497000 audit: BPF prog-id=219 op=LOAD Jan 23 05:39:57.497000 audit[4448]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4430 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262363664626532366433623432396339663930393863373762353364 Jan 23 05:39:57.497000 audit: BPF prog-id=219 op=UNLOAD Jan 23 05:39:57.497000 audit[4448]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4430 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262363664626532366433623432396339663930393863373762353364 Jan 23 05:39:57.497000 audit: BPF prog-id=218 op=UNLOAD Jan 23 05:39:57.497000 audit[4448]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4430 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262363664626532366433623432396339663930393863373762353364 Jan 23 05:39:57.497000 audit: BPF prog-id=220 op=LOAD Jan 23 05:39:57.497000 audit[4448]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4430 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:57.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262363664626532366433623432396339663930393863373762353364 Jan 23 05:39:57.500253 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 05:39:57.517292 containerd[1597]: time="2026-01-23T05:39:57.517231435Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:39:57.518751 containerd[1597]: time="2026-01-23T05:39:57.518612554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 05:39:57.518876 containerd[1597]: time="2026-01-23T05:39:57.518651068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 05:39:57.519187 kubelet[2784]: 
E0123 05:39:57.519099 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 05:39:57.519187 kubelet[2784]: E0123 05:39:57.519169 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 05:39:57.519516 kubelet[2784]: E0123 05:39:57.519289 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-686jd,ReadOnly:true,MountPa
th:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6986698974-zhprj_calico-system(3cdbf9fd-0ae3-408e-ba62-8b7474385dec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 05:39:57.520844 kubelet[2784]: E0123 05:39:57.520789 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6986698974-zhprj" podUID="3cdbf9fd-0ae3-408e-ba62-8b7474385dec" Jan 23 05:39:57.546606 containerd[1597]: time="2026-01-23T05:39:57.546507348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f956777f-dhm59,Uid:9393f034-f86c-4875-9435-8f85b0225d78,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2b66dbe26d3b429c9f9098c77b53dc5cb1f14a3edd203156100c29223de1e9df\"" Jan 23 05:39:57.548536 containerd[1597]: time="2026-01-23T05:39:57.548491692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 05:39:57.608774 containerd[1597]: time="2026-01-23T05:39:57.608573065Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:39:57.610172 containerd[1597]: time="2026-01-23T05:39:57.610122659Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 05:39:57.610299 containerd[1597]: time="2026-01-23T05:39:57.610219840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 05:39:57.610644 kubelet[2784]: E0123 05:39:57.610570 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:39:57.610748 kubelet[2784]: E0123 05:39:57.610648 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:39:57.611138 kubelet[2784]: E0123 05:39:57.610890 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhcw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68f956777f-dhm59_calico-apiserver(9393f034-f86c-4875-9435-8f85b0225d78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 05:39:57.612401 kubelet[2784]: E0123 05:39:57.612364 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" podUID="9393f034-f86c-4875-9435-8f85b0225d78" Jan 23 05:39:57.993002 kubelet[2784]: E0123 05:39:57.992411 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:57.994532 containerd[1597]: 
time="2026-01-23T05:39:57.994477762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mdl4l,Uid:a6ad78ea-276e-4c01-82bf-ff4fc9289c99,Namespace:kube-system,Attempt:0,}" Jan 23 05:39:58.146978 systemd-networkd[1506]: calib72ca73d016: Link UP Jan 23 05:39:58.148343 systemd-networkd[1506]: calib72ca73d016: Gained carrier Jan 23 05:39:58.166548 containerd[1597]: 2026-01-23 05:39:58.047 [INFO][4474] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--mdl4l-eth0 coredns-668d6bf9bc- kube-system a6ad78ea-276e-4c01-82bf-ff4fc9289c99 888 0 2026-01-23 05:39:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-mdl4l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib72ca73d016 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" Namespace="kube-system" Pod="coredns-668d6bf9bc-mdl4l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mdl4l-" Jan 23 05:39:58.166548 containerd[1597]: 2026-01-23 05:39:58.047 [INFO][4474] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" Namespace="kube-system" Pod="coredns-668d6bf9bc-mdl4l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mdl4l-eth0" Jan 23 05:39:58.166548 containerd[1597]: 2026-01-23 05:39:58.092 [INFO][4488] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" HandleID="k8s-pod-network.f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" Workload="localhost-k8s-coredns--668d6bf9bc--mdl4l-eth0" Jan 23 05:39:58.166837 containerd[1597]: 
2026-01-23 05:39:58.092 [INFO][4488] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" HandleID="k8s-pod-network.f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" Workload="localhost-k8s-coredns--668d6bf9bc--mdl4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325390), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-mdl4l", "timestamp":"2026-01-23 05:39:58.092543024 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 05:39:58.166837 containerd[1597]: 2026-01-23 05:39:58.092 [INFO][4488] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 05:39:58.166837 containerd[1597]: 2026-01-23 05:39:58.092 [INFO][4488] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 05:39:58.166837 containerd[1597]: 2026-01-23 05:39:58.092 [INFO][4488] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 05:39:58.166837 containerd[1597]: 2026-01-23 05:39:58.101 [INFO][4488] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" host="localhost" Jan 23 05:39:58.166837 containerd[1597]: 2026-01-23 05:39:58.108 [INFO][4488] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 05:39:58.166837 containerd[1597]: 2026-01-23 05:39:58.115 [INFO][4488] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 05:39:58.166837 containerd[1597]: 2026-01-23 05:39:58.118 [INFO][4488] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 05:39:58.166837 containerd[1597]: 2026-01-23 05:39:58.121 [INFO][4488] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 05:39:58.166837 containerd[1597]: 2026-01-23 05:39:58.121 [INFO][4488] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" host="localhost" Jan 23 05:39:58.167348 containerd[1597]: 2026-01-23 05:39:58.123 [INFO][4488] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530 Jan 23 05:39:58.167348 containerd[1597]: 2026-01-23 05:39:58.129 [INFO][4488] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" host="localhost" Jan 23 05:39:58.167348 containerd[1597]: 2026-01-23 05:39:58.138 [INFO][4488] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" host="localhost" Jan 23 05:39:58.167348 containerd[1597]: 2026-01-23 05:39:58.138 [INFO][4488] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" host="localhost" Jan 23 05:39:58.167348 containerd[1597]: 2026-01-23 05:39:58.138 [INFO][4488] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 05:39:58.167348 containerd[1597]: 2026-01-23 05:39:58.138 [INFO][4488] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" HandleID="k8s-pod-network.f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" Workload="localhost-k8s-coredns--668d6bf9bc--mdl4l-eth0" Jan 23 05:39:58.167514 containerd[1597]: 2026-01-23 05:39:58.143 [INFO][4474] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" Namespace="kube-system" Pod="coredns-668d6bf9bc-mdl4l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mdl4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mdl4l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a6ad78ea-276e-4c01-82bf-ff4fc9289c99", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-mdl4l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib72ca73d016", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:39:58.167618 containerd[1597]: 2026-01-23 05:39:58.143 [INFO][4474] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" Namespace="kube-system" Pod="coredns-668d6bf9bc-mdl4l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mdl4l-eth0" Jan 23 05:39:58.167618 containerd[1597]: 2026-01-23 05:39:58.143 [INFO][4474] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib72ca73d016 ContainerID="f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" Namespace="kube-system" Pod="coredns-668d6bf9bc-mdl4l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mdl4l-eth0" Jan 23 05:39:58.167618 containerd[1597]: 2026-01-23 05:39:58.148 [INFO][4474] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" Namespace="kube-system" Pod="coredns-668d6bf9bc-mdl4l" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mdl4l-eth0" Jan 23 05:39:58.167768 containerd[1597]: 2026-01-23 05:39:58.149 [INFO][4474] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" Namespace="kube-system" Pod="coredns-668d6bf9bc-mdl4l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mdl4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mdl4l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a6ad78ea-276e-4c01-82bf-ff4fc9289c99", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530", Pod:"coredns-668d6bf9bc-mdl4l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib72ca73d016", MAC:"4a:4d:68:e8:34:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:39:58.167768 containerd[1597]: 2026-01-23 05:39:58.162 [INFO][4474] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" Namespace="kube-system" Pod="coredns-668d6bf9bc-mdl4l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mdl4l-eth0" Jan 23 05:39:58.191000 audit[4505]: NETFILTER_CFG table=filter:127 family=2 entries=50 op=nft_register_chain pid=4505 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 05:39:58.191000 audit[4505]: SYSCALL arch=c000003e syscall=46 success=yes exit=24928 a0=3 a1=7ffc17399300 a2=0 a3=7ffc173992ec items=0 ppid=4015 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.191000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 05:39:58.202226 containerd[1597]: time="2026-01-23T05:39:58.202163649Z" level=info msg="connecting to shim f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530" address="unix:///run/containerd/s/e2a9e7575cf1f083b6b16679708736d503a97a1a2011cf30c19d827a7e62ed62" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:39:58.240287 systemd[1]: Started cri-containerd-f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530.scope - libcontainer container f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530. 
Jan 23 05:39:58.258000 audit: BPF prog-id=221 op=LOAD Jan 23 05:39:58.259000 audit: BPF prog-id=222 op=LOAD Jan 23 05:39:58.259000 audit[4526]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632623232353430643932343036643131616530306439306135306335 Jan 23 05:39:58.259000 audit: BPF prog-id=222 op=UNLOAD Jan 23 05:39:58.259000 audit[4526]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632623232353430643932343036643131616530306439306135306335 Jan 23 05:39:58.260000 audit: BPF prog-id=223 op=LOAD Jan 23 05:39:58.260000 audit[4526]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.260000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632623232353430643932343036643131616530306439306135306335 Jan 23 05:39:58.261000 audit: BPF prog-id=224 op=LOAD Jan 23 05:39:58.261000 audit[4526]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632623232353430643932343036643131616530306439306135306335 Jan 23 05:39:58.261000 audit: BPF prog-id=224 op=UNLOAD Jan 23 05:39:58.261000 audit[4526]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632623232353430643932343036643131616530306439306135306335 Jan 23 05:39:58.262000 audit: BPF prog-id=223 op=UNLOAD Jan 23 05:39:58.262000 audit[4526]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
05:39:58.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632623232353430643932343036643131616530306439306135306335 Jan 23 05:39:58.262000 audit: BPF prog-id=225 op=LOAD Jan 23 05:39:58.262000 audit[4526]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4514 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632623232353430643932343036643131616530306439306135306335 Jan 23 05:39:58.265362 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 05:39:58.310318 containerd[1597]: time="2026-01-23T05:39:58.310195285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mdl4l,Uid:a6ad78ea-276e-4c01-82bf-ff4fc9289c99,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530\"" Jan 23 05:39:58.311743 kubelet[2784]: E0123 05:39:58.311644 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:58.315361 containerd[1597]: time="2026-01-23T05:39:58.315321949Z" level=info msg="CreateContainer within sandbox \"f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 05:39:58.333984 containerd[1597]: 
time="2026-01-23T05:39:58.333874818Z" level=info msg="Container 9e5054e6657eea703481733a46cb505699de5e6e172c080adc94b912a09773a6: CDI devices from CRI Config.CDIDevices: []" Jan 23 05:39:58.339929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210837651.mount: Deactivated successfully. Jan 23 05:39:58.344441 containerd[1597]: time="2026-01-23T05:39:58.344326957Z" level=info msg="CreateContainer within sandbox \"f2b22540d92406d11ae00d90a50c59403076ad7844ad46a12c8178bbb9111530\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e5054e6657eea703481733a46cb505699de5e6e172c080adc94b912a09773a6\"" Jan 23 05:39:58.345493 kubelet[2784]: E0123 05:39:58.345433 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6986698974-zhprj" podUID="3cdbf9fd-0ae3-408e-ba62-8b7474385dec" Jan 23 05:39:58.345673 containerd[1597]: time="2026-01-23T05:39:58.345601877Z" level=info msg="StartContainer for \"9e5054e6657eea703481733a46cb505699de5e6e172c080adc94b912a09773a6\"" Jan 23 05:39:58.349048 containerd[1597]: time="2026-01-23T05:39:58.348897321Z" level=info msg="connecting to shim 9e5054e6657eea703481733a46cb505699de5e6e172c080adc94b912a09773a6" address="unix:///run/containerd/s/e2a9e7575cf1f083b6b16679708736d503a97a1a2011cf30c19d827a7e62ed62" protocol=ttrpc version=3 Jan 23 05:39:58.350293 kubelet[2784]: E0123 05:39:58.349335 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" podUID="9393f034-f86c-4875-9435-8f85b0225d78" Jan 23 05:39:58.388490 systemd[1]: Started cri-containerd-9e5054e6657eea703481733a46cb505699de5e6e172c080adc94b912a09773a6.scope - libcontainer container 9e5054e6657eea703481733a46cb505699de5e6e172c080adc94b912a09773a6. Jan 23 05:39:58.402000 audit[4571]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=4571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:58.402000 audit[4571]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffa0926310 a2=0 a3=7fffa09262fc items=0 ppid=2961 pid=4571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:58.408000 audit[4571]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=4571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:58.408000 audit[4571]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffa0926310 a2=0 a3=0 items=0 ppid=2961 pid=4571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.408000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:58.409000 audit: BPF prog-id=226 
op=LOAD Jan 23 05:39:58.410000 audit: BPF prog-id=227 op=LOAD Jan 23 05:39:58.410000 audit[4551]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4514 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965353035346536363537656561373033343831373333613436636235 Jan 23 05:39:58.410000 audit: BPF prog-id=227 op=UNLOAD Jan 23 05:39:58.410000 audit[4551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4514 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965353035346536363537656561373033343831373333613436636235 Jan 23 05:39:58.410000 audit: BPF prog-id=228 op=LOAD Jan 23 05:39:58.410000 audit[4551]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4514 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.410000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965353035346536363537656561373033343831373333613436636235 Jan 23 05:39:58.410000 audit: BPF prog-id=229 op=LOAD Jan 23 05:39:58.410000 audit[4551]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4514 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965353035346536363537656561373033343831373333613436636235 Jan 23 05:39:58.411000 audit: BPF prog-id=229 op=UNLOAD Jan 23 05:39:58.411000 audit[4551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4514 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965353035346536363537656561373033343831373333613436636235 Jan 23 05:39:58.411000 audit: BPF prog-id=228 op=UNLOAD Jan 23 05:39:58.411000 audit[4551]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4514 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
05:39:58.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965353035346536363537656561373033343831373333613436636235 Jan 23 05:39:58.411000 audit: BPF prog-id=230 op=LOAD Jan 23 05:39:58.411000 audit[4551]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4514 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:58.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965353035346536363537656561373033343831373333613436636235 Jan 23 05:39:58.440039 containerd[1597]: time="2026-01-23T05:39:58.439973088Z" level=info msg="StartContainer for \"9e5054e6657eea703481733a46cb505699de5e6e172c080adc94b912a09773a6\" returns successfully" Jan 23 05:39:58.739305 systemd-networkd[1506]: calia2a7b8af3e8: Gained IPv6LL Jan 23 05:39:58.995738 containerd[1597]: time="2026-01-23T05:39:58.995460593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fb784f8d9-hrfdf,Uid:3912831c-901b-4041-9115-637bb8679bc2,Namespace:calico-apiserver,Attempt:0,}" Jan 23 05:39:59.167763 systemd-networkd[1506]: cali9c01262bdbd: Link UP Jan 23 05:39:59.169353 systemd-networkd[1506]: cali9c01262bdbd: Gained carrier Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.081 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--fb784f8d9--hrfdf-eth0 calico-apiserver-fb784f8d9- calico-apiserver 3912831c-901b-4041-9115-637bb8679bc2 886 0 
2026-01-23 05:39:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fb784f8d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-fb784f8d9-hrfdf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9c01262bdbd [] [] }} ContainerID="3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" Namespace="calico-apiserver" Pod="calico-apiserver-fb784f8d9-hrfdf" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb784f8d9--hrfdf-" Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.082 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" Namespace="calico-apiserver" Pod="calico-apiserver-fb784f8d9-hrfdf" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb784f8d9--hrfdf-eth0" Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.120 [INFO][4610] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" HandleID="k8s-pod-network.3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" Workload="localhost-k8s-calico--apiserver--fb784f8d9--hrfdf-eth0" Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.120 [INFO][4610] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" HandleID="k8s-pod-network.3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" Workload="localhost-k8s-calico--apiserver--fb784f8d9--hrfdf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1b30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-fb784f8d9-hrfdf", "timestamp":"2026-01-23 
05:39:59.120423607 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.120 [INFO][4610] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.120 [INFO][4610] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.120 [INFO][4610] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.129 [INFO][4610] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" host="localhost" Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.136 [INFO][4610] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.141 [INFO][4610] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.143 [INFO][4610] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.146 [INFO][4610] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.146 [INFO][4610] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" host="localhost" Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.148 [INFO][4610] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.152 [INFO][4610] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" host="localhost" Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.159 [INFO][4610] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" host="localhost" Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.159 [INFO][4610] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" host="localhost" Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.159 [INFO][4610] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 05:39:59.193788 containerd[1597]: 2026-01-23 05:39:59.159 [INFO][4610] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" HandleID="k8s-pod-network.3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" Workload="localhost-k8s-calico--apiserver--fb784f8d9--hrfdf-eth0" Jan 23 05:39:59.194405 containerd[1597]: 2026-01-23 05:39:59.163 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" Namespace="calico-apiserver" Pod="calico-apiserver-fb784f8d9-hrfdf" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb784f8d9--hrfdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fb784f8d9--hrfdf-eth0", GenerateName:"calico-apiserver-fb784f8d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3912831c-901b-4041-9115-637bb8679bc2", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fb784f8d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-fb784f8d9-hrfdf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c01262bdbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:39:59.194405 containerd[1597]: 2026-01-23 05:39:59.163 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" Namespace="calico-apiserver" Pod="calico-apiserver-fb784f8d9-hrfdf" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb784f8d9--hrfdf-eth0" Jan 23 05:39:59.194405 containerd[1597]: 2026-01-23 05:39:59.163 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c01262bdbd ContainerID="3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" Namespace="calico-apiserver" Pod="calico-apiserver-fb784f8d9-hrfdf" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb784f8d9--hrfdf-eth0" Jan 23 05:39:59.194405 containerd[1597]: 2026-01-23 05:39:59.171 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" Namespace="calico-apiserver" Pod="calico-apiserver-fb784f8d9-hrfdf" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb784f8d9--hrfdf-eth0" Jan 23 05:39:59.194405 containerd[1597]: 2026-01-23 05:39:59.173 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" Namespace="calico-apiserver" Pod="calico-apiserver-fb784f8d9-hrfdf" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb784f8d9--hrfdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--fb784f8d9--hrfdf-eth0", GenerateName:"calico-apiserver-fb784f8d9-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"3912831c-901b-4041-9115-637bb8679bc2", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fb784f8d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f", Pod:"calico-apiserver-fb784f8d9-hrfdf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c01262bdbd", MAC:"82:95:38:0b:02:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:39:59.194405 containerd[1597]: 2026-01-23 05:39:59.188 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" Namespace="calico-apiserver" Pod="calico-apiserver-fb784f8d9-hrfdf" WorkloadEndpoint="localhost-k8s-calico--apiserver--fb784f8d9--hrfdf-eth0" Jan 23 05:39:59.219000 audit[4625]: NETFILTER_CFG table=filter:130 family=2 entries=49 op=nft_register_chain pid=4625 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 05:39:59.219000 audit[4625]: SYSCALL arch=c000003e syscall=46 success=yes exit=25452 a0=3 a1=7ffca266ad90 a2=0 a3=7ffca266ad7c 
items=0 ppid=4015 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:59.219000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 05:39:59.228003 containerd[1597]: time="2026-01-23T05:39:59.227869620Z" level=info msg="connecting to shim 3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f" address="unix:///run/containerd/s/1d0507ad64acfc669cb52ec1edbe7d2bc5c42c1e1d91168ccde95c6e32339e13" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:39:59.255177 systemd-networkd[1506]: cali1bb8fcccdcf: Gained IPv6LL Jan 23 05:39:59.326976 systemd[1]: Started cri-containerd-3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f.scope - libcontainer container 3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f. 
Jan 23 05:39:59.348000 audit: BPF prog-id=231 op=LOAD Jan 23 05:39:59.349000 audit: BPF prog-id=232 op=LOAD Jan 23 05:39:59.349000 audit[4647]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4634 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:59.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362333562373439353165353063323563303034356433663437636635 Jan 23 05:39:59.349000 audit: BPF prog-id=232 op=UNLOAD Jan 23 05:39:59.349000 audit[4647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4634 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:59.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362333562373439353165353063323563303034356433663437636635 Jan 23 05:39:59.349000 audit: BPF prog-id=233 op=LOAD Jan 23 05:39:59.349000 audit[4647]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4634 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:59.349000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362333562373439353165353063323563303034356433663437636635 Jan 23 05:39:59.349000 audit: BPF prog-id=234 op=LOAD Jan 23 05:39:59.349000 audit[4647]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4634 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:59.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362333562373439353165353063323563303034356433663437636635 Jan 23 05:39:59.349000 audit: BPF prog-id=234 op=UNLOAD Jan 23 05:39:59.349000 audit[4647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4634 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:59.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362333562373439353165353063323563303034356433663437636635 Jan 23 05:39:59.349000 audit: BPF prog-id=233 op=UNLOAD Jan 23 05:39:59.349000 audit[4647]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4634 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
05:39:59.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362333562373439353165353063323563303034356433663437636635 Jan 23 05:39:59.350000 audit: BPF prog-id=235 op=LOAD Jan 23 05:39:59.350000 audit[4647]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4634 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:59.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362333562373439353165353063323563303034356433663437636635 Jan 23 05:39:59.352605 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 05:39:59.358895 kubelet[2784]: E0123 05:39:59.358826 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:39:59.363231 kubelet[2784]: E0123 05:39:59.363132 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" podUID="9393f034-f86c-4875-9435-8f85b0225d78" Jan 23 05:39:59.363729 kubelet[2784]: E0123 
05:39:59.363580 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6986698974-zhprj" podUID="3cdbf9fd-0ae3-408e-ba62-8b7474385dec" Jan 23 05:39:59.413015 kubelet[2784]: I0123 05:39:59.412925 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mdl4l" podStartSLOduration=34.4129021 podStartE2EDuration="34.4129021s" podCreationTimestamp="2026-01-23 05:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 05:39:59.38782715 +0000 UTC m=+39.527420362" watchObservedRunningTime="2026-01-23 05:39:59.4129021 +0000 UTC m=+39.552495311" Jan 23 05:39:59.424000 audit[4669]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=4669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:59.424000 audit[4669]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe3bcaf2e0 a2=0 a3=7ffe3bcaf2cc items=0 ppid=2961 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:59.424000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:59.433000 audit[4669]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=4669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 
05:39:59.433000 audit[4669]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe3bcaf2e0 a2=0 a3=0 items=0 ppid=2961 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:59.433000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:59.454776 containerd[1597]: time="2026-01-23T05:39:59.454559068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fb784f8d9-hrfdf,Uid:3912831c-901b-4041-9115-637bb8679bc2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3b35b74951e50c25c0045d3f47cf5f2792a63227e06ca6a39834a3b1e5d6c19f\"" Jan 23 05:39:59.459366 containerd[1597]: time="2026-01-23T05:39:59.459323644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 05:39:59.500000 audit[4676]: NETFILTER_CFG table=filter:133 family=2 entries=17 op=nft_register_rule pid=4676 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:59.500000 audit[4676]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffe049a5e0 a2=0 a3=7fffe049a5cc items=0 ppid=2961 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:59.500000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:59.515000 audit[4676]: NETFILTER_CFG table=nat:134 family=2 entries=35 op=nft_register_chain pid=4676 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:39:59.515000 audit[4676]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffe049a5e0 a2=0 a3=7fffe049a5cc items=0 
ppid=2961 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:39:59.515000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:39:59.529136 containerd[1597]: time="2026-01-23T05:39:59.528250953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:39:59.530766 containerd[1597]: time="2026-01-23T05:39:59.529886647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 05:39:59.530766 containerd[1597]: time="2026-01-23T05:39:59.530013434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 05:39:59.531604 kubelet[2784]: E0123 05:39:59.531013 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:39:59.531604 kubelet[2784]: E0123 05:39:59.531155 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:39:59.531604 kubelet[2784]: E0123 05:39:59.531383 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrpbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fb784f8d9-hrfdf_calico-apiserver(3912831c-901b-4041-9115-637bb8679bc2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 05:39:59.533314 kubelet[2784]: E0123 05:39:59.533141 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" podUID="3912831c-901b-4041-9115-637bb8679bc2" Jan 23 05:39:59.635413 systemd-networkd[1506]: calib72ca73d016: Gained IPv6LL Jan 23 05:39:59.992368 kubelet[2784]: E0123 05:39:59.992309 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jan 23 05:39:59.994762 containerd[1597]: time="2026-01-23T05:39:59.994590948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f956777f-b4hm5,Uid:03afe386-d286-45e1-b2d1-9d888b5a436b,Namespace:calico-apiserver,Attempt:0,}" Jan 23 05:39:59.995614 containerd[1597]: time="2026-01-23T05:39:59.995509012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-47b87,Uid:055be855-4fc5-42d5-be74-896584a97ac5,Namespace:kube-system,Attempt:0,}" Jan 23 05:40:00.215655 systemd-networkd[1506]: calia8d5bca012d: Link UP Jan 23 05:40:00.217508 systemd-networkd[1506]: calia8d5bca012d: Gained carrier Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.080 [INFO][4682] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68f956777f--b4hm5-eth0 calico-apiserver-68f956777f- calico-apiserver 03afe386-d286-45e1-b2d1-9d888b5a436b 876 0 2026-01-23 05:39:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68f956777f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68f956777f-b4hm5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia8d5bca012d [] [] }} ContainerID="83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-b4hm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--b4hm5-" Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.083 [INFO][4682] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-b4hm5" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--b4hm5-eth0" Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.137 [INFO][4705] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" HandleID="k8s-pod-network.83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" Workload="localhost-k8s-calico--apiserver--68f956777f--b4hm5-eth0" Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.137 [INFO][4705] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" HandleID="k8s-pod-network.83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" Workload="localhost-k8s-calico--apiserver--68f956777f--b4hm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043be60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68f956777f-b4hm5", "timestamp":"2026-01-23 05:40:00.137723607 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.138 [INFO][4705] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.138 [INFO][4705] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.138 [INFO][4705] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.145 [INFO][4705] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" host="localhost" Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.154 [INFO][4705] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.162 [INFO][4705] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.165 [INFO][4705] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.168 [INFO][4705] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.168 [INFO][4705] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" host="localhost" Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.173 [INFO][4705] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.185 [INFO][4705] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" host="localhost" Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.192 [INFO][4705] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" host="localhost" Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.192 [INFO][4705] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" host="localhost" Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.192 [INFO][4705] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 05:40:00.249834 containerd[1597]: 2026-01-23 05:40:00.193 [INFO][4705] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" HandleID="k8s-pod-network.83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" Workload="localhost-k8s-calico--apiserver--68f956777f--b4hm5-eth0" Jan 23 05:40:00.251305 containerd[1597]: 2026-01-23 05:40:00.204 [INFO][4682] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-b4hm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--b4hm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68f956777f--b4hm5-eth0", GenerateName:"calico-apiserver-68f956777f-", Namespace:"calico-apiserver", SelfLink:"", UID:"03afe386-d286-45e1-b2d1-9d888b5a436b", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f956777f", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68f956777f-b4hm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia8d5bca012d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:40:00.251305 containerd[1597]: 2026-01-23 05:40:00.205 [INFO][4682] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-b4hm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--b4hm5-eth0" Jan 23 05:40:00.251305 containerd[1597]: 2026-01-23 05:40:00.205 [INFO][4682] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8d5bca012d ContainerID="83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-b4hm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--b4hm5-eth0" Jan 23 05:40:00.251305 containerd[1597]: 2026-01-23 05:40:00.217 [INFO][4682] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-b4hm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--b4hm5-eth0" Jan 23 05:40:00.251305 containerd[1597]: 2026-01-23 05:40:00.219 [INFO][4682] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-b4hm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--b4hm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68f956777f--b4hm5-eth0", GenerateName:"calico-apiserver-68f956777f-", Namespace:"calico-apiserver", SelfLink:"", UID:"03afe386-d286-45e1-b2d1-9d888b5a436b", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f956777f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e", Pod:"calico-apiserver-68f956777f-b4hm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia8d5bca012d", MAC:"16:f6:ef:57:55:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:40:00.251305 containerd[1597]: 2026-01-23 05:40:00.237 [INFO][4682] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" Namespace="calico-apiserver" Pod="calico-apiserver-68f956777f-b4hm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f956777f--b4hm5-eth0" Jan 23 05:40:00.291000 audit[4731]: NETFILTER_CFG table=filter:135 family=2 entries=59 op=nft_register_chain pid=4731 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 05:40:00.291000 audit[4731]: SYSCALL arch=c000003e syscall=46 success=yes exit=29492 a0=3 a1=7ffc08d4aa60 a2=0 a3=7ffc08d4aa4c items=0 ppid=4015 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.291000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 05:40:00.329443 containerd[1597]: time="2026-01-23T05:40:00.329378037Z" level=info msg="connecting to shim 83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e" address="unix:///run/containerd/s/46f7a45c98f9c15ba1c89f29f37677cf4a4c1a225ccc8577b375c51a3008f5f2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:40:00.342228 systemd-networkd[1506]: cali055b7c23fe9: Link UP Jan 23 05:40:00.342618 systemd-networkd[1506]: cali055b7c23fe9: Gained carrier Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.098 [INFO][4690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--47b87-eth0 coredns-668d6bf9bc- kube-system 055be855-4fc5-42d5-be74-896584a97ac5 880 0 2026-01-23 05:39:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-47b87 eth0 coredns 
[] [] [kns.kube-system ksa.kube-system.coredns] cali055b7c23fe9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" Namespace="kube-system" Pod="coredns-668d6bf9bc-47b87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--47b87-" Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.099 [INFO][4690] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" Namespace="kube-system" Pod="coredns-668d6bf9bc-47b87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--47b87-eth0" Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.166 [INFO][4713] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" HandleID="k8s-pod-network.b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" Workload="localhost-k8s-coredns--668d6bf9bc--47b87-eth0" Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.166 [INFO][4713] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" HandleID="k8s-pod-network.b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" Workload="localhost-k8s-coredns--668d6bf9bc--47b87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025b370), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-47b87", "timestamp":"2026-01-23 05:40:00.16626505 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.166 [INFO][4713] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.193 [INFO][4713] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.193 [INFO][4713] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.246 [INFO][4713] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" host="localhost" Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.257 [INFO][4713] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.281 [INFO][4713] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.287 [INFO][4713] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.294 [INFO][4713] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.294 [INFO][4713] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" host="localhost" Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.297 [INFO][4713] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625 Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.307 [INFO][4713] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" host="localhost" Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.325 [INFO][4713] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" host="localhost" Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.326 [INFO][4713] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" host="localhost" Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.326 [INFO][4713] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 05:40:00.366426 containerd[1597]: 2026-01-23 05:40:00.326 [INFO][4713] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" HandleID="k8s-pod-network.b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" Workload="localhost-k8s-coredns--668d6bf9bc--47b87-eth0" Jan 23 05:40:00.367444 containerd[1597]: 2026-01-23 05:40:00.335 [INFO][4690] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" Namespace="kube-system" Pod="coredns-668d6bf9bc-47b87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--47b87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--47b87-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"055be855-4fc5-42d5-be74-896584a97ac5", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-47b87", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali055b7c23fe9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:40:00.367444 containerd[1597]: 2026-01-23 05:40:00.335 [INFO][4690] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" Namespace="kube-system" Pod="coredns-668d6bf9bc-47b87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--47b87-eth0" Jan 23 05:40:00.367444 containerd[1597]: 2026-01-23 05:40:00.335 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali055b7c23fe9 ContainerID="b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" Namespace="kube-system" Pod="coredns-668d6bf9bc-47b87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--47b87-eth0" Jan 23 05:40:00.367444 containerd[1597]: 2026-01-23 05:40:00.340 [INFO][4690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" Namespace="kube-system" Pod="coredns-668d6bf9bc-47b87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--47b87-eth0" Jan 23 05:40:00.367444 containerd[1597]: 2026-01-23 05:40:00.340 [INFO][4690] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" Namespace="kube-system" Pod="coredns-668d6bf9bc-47b87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--47b87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--47b87-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"055be855-4fc5-42d5-be74-896584a97ac5", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625", Pod:"coredns-668d6bf9bc-47b87", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali055b7c23fe9", MAC:"5e:49:1d:66:bd:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:40:00.367444 containerd[1597]: 2026-01-23 05:40:00.357 [INFO][4690] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" Namespace="kube-system" Pod="coredns-668d6bf9bc-47b87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--47b87-eth0" Jan 23 05:40:00.369219 kubelet[2784]: E0123 05:40:00.369186 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:40:00.371546 kubelet[2784]: E0123 05:40:00.370148 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" podUID="3912831c-901b-4041-9115-637bb8679bc2" Jan 23 05:40:00.395510 systemd[1]: Started cri-containerd-83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e.scope - libcontainer container 83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e. 
Jan 23 05:40:00.420434 containerd[1597]: time="2026-01-23T05:40:00.420298594Z" level=info msg="connecting to shim b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625" address="unix:///run/containerd/s/2adff1d670706c9511e09c12938b7c189b68ba7675a3925baf6731b8a9780961" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:40:00.424000 audit[4787]: NETFILTER_CFG table=filter:136 family=2 entries=48 op=nft_register_chain pid=4787 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 05:40:00.424000 audit[4787]: SYSCALL arch=c000003e syscall=46 success=yes exit=22704 a0=3 a1=7ffe05430920 a2=0 a3=7ffe0543090c items=0 ppid=4015 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.424000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 05:40:00.462000 audit: BPF prog-id=236 op=LOAD Jan 23 05:40:00.462000 audit[4816]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=4816 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:40:00.462000 audit[4816]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff40bec610 a2=0 a3=7fff40bec5fc items=0 ppid=2961 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.462000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:40:00.464000 audit: BPF prog-id=237 op=LOAD Jan 23 05:40:00.464000 audit[4753]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4741 
pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833383130636334366636343062643763373037306630396537633930 Jan 23 05:40:00.464000 audit: BPF prog-id=237 op=UNLOAD Jan 23 05:40:00.464000 audit[4753]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4741 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833383130636334366636343062643763373037306630396537633930 Jan 23 05:40:00.464000 audit: BPF prog-id=238 op=LOAD Jan 23 05:40:00.464000 audit[4753]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4741 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833383130636334366636343062643763373037306630396537633930 Jan 23 05:40:00.464000 audit: BPF prog-id=239 op=LOAD Jan 23 05:40:00.464000 audit[4753]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 
a1=c000186218 a2=98 a3=0 items=0 ppid=4741 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833383130636334366636343062643763373037306630396537633930 Jan 23 05:40:00.464000 audit: BPF prog-id=239 op=UNLOAD Jan 23 05:40:00.464000 audit[4753]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4741 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833383130636334366636343062643763373037306630396537633930 Jan 23 05:40:00.464000 audit: BPF prog-id=238 op=UNLOAD Jan 23 05:40:00.464000 audit[4753]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4741 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833383130636334366636343062643763373037306630396537633930 Jan 23 05:40:00.464000 audit: BPF prog-id=240 op=LOAD Jan 23 05:40:00.464000 audit[4753]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4741 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833383130636334366636343062643763373037306630396537633930 Jan 23 05:40:00.469137 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 05:40:00.484546 systemd[1]: Started cri-containerd-b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625.scope - libcontainer container b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625. Jan 23 05:40:00.485000 audit[4816]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=4816 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:40:00.485000 audit[4816]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff40bec610 a2=0 a3=7fff40bec5fc items=0 ppid=2961 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.485000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:40:00.515000 audit: BPF prog-id=241 op=LOAD Jan 23 05:40:00.516000 audit: BPF prog-id=242 op=LOAD Jan 23 05:40:00.516000 audit[4803]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4785 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238343736373632613261623334386364313637376664663834323135 Jan 23 05:40:00.516000 audit: BPF prog-id=242 op=UNLOAD Jan 23 05:40:00.516000 audit[4803]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4785 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238343736373632613261623334386364313637376664663834323135 Jan 23 05:40:00.516000 audit: BPF prog-id=243 op=LOAD Jan 23 05:40:00.516000 audit[4803]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4785 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238343736373632613261623334386364313637376664663834323135 Jan 23 05:40:00.516000 audit: BPF prog-id=244 op=LOAD Jan 23 05:40:00.516000 audit[4803]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4785 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238343736373632613261623334386364313637376664663834323135 Jan 23 05:40:00.516000 audit: BPF prog-id=244 op=UNLOAD Jan 23 05:40:00.516000 audit[4803]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4785 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238343736373632613261623334386364313637376664663834323135 Jan 23 05:40:00.516000 audit: BPF prog-id=243 op=UNLOAD Jan 23 05:40:00.516000 audit[4803]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4785 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238343736373632613261623334386364313637376664663834323135 Jan 23 05:40:00.516000 audit: BPF prog-id=245 op=LOAD Jan 23 05:40:00.516000 audit[4803]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4785 pid=4803 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238343736373632613261623334386364313637376664663834323135 Jan 23 05:40:00.519327 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 05:40:00.595932 containerd[1597]: time="2026-01-23T05:40:00.595882447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-47b87,Uid:055be855-4fc5-42d5-be74-896584a97ac5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625\"" Jan 23 05:40:00.598419 kubelet[2784]: E0123 05:40:00.598381 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:40:00.603262 containerd[1597]: time="2026-01-23T05:40:00.603180599Z" level=info msg="CreateContainer within sandbox \"b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 05:40:00.614375 containerd[1597]: time="2026-01-23T05:40:00.614160162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f956777f-b4hm5,Uid:03afe386-d286-45e1-b2d1-9d888b5a436b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"83810cc46f640bd7c7070f09e7c903bc6701464fb4b7b4b56649aad2071b316e\"" Jan 23 05:40:00.617441 containerd[1597]: time="2026-01-23T05:40:00.617415900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 05:40:00.625470 containerd[1597]: time="2026-01-23T05:40:00.625384063Z" level=info 
msg="Container d181abbe8b8feb8243becf2c3e9e28cacc59d6b4600f49412ac8f3f9b457de24: CDI devices from CRI Config.CDIDevices: []" Jan 23 05:40:00.635095 containerd[1597]: time="2026-01-23T05:40:00.635006014Z" level=info msg="CreateContainer within sandbox \"b8476762a2ab348cd1677fdf84215bf2f106e3f6c086670cf3665eabc6659625\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d181abbe8b8feb8243becf2c3e9e28cacc59d6b4600f49412ac8f3f9b457de24\"" Jan 23 05:40:00.637365 containerd[1597]: time="2026-01-23T05:40:00.637296569Z" level=info msg="StartContainer for \"d181abbe8b8feb8243becf2c3e9e28cacc59d6b4600f49412ac8f3f9b457de24\"" Jan 23 05:40:00.638738 containerd[1597]: time="2026-01-23T05:40:00.638632484Z" level=info msg="connecting to shim d181abbe8b8feb8243becf2c3e9e28cacc59d6b4600f49412ac8f3f9b457de24" address="unix:///run/containerd/s/2adff1d670706c9511e09c12938b7c189b68ba7675a3925baf6731b8a9780961" protocol=ttrpc version=3 Jan 23 05:40:00.666306 systemd[1]: Started cri-containerd-d181abbe8b8feb8243becf2c3e9e28cacc59d6b4600f49412ac8f3f9b457de24.scope - libcontainer container d181abbe8b8feb8243becf2c3e9e28cacc59d6b4600f49412ac8f3f9b457de24. 
Jan 23 05:40:00.679954 containerd[1597]: time="2026-01-23T05:40:00.679010338Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:00.684515 containerd[1597]: time="2026-01-23T05:40:00.683848272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:00.684515 containerd[1597]: time="2026-01-23T05:40:00.683908003Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 05:40:00.684650 kubelet[2784]: E0123 05:40:00.684462 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:40:00.684650 kubelet[2784]: E0123 05:40:00.684536 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:40:00.684864 kubelet[2784]: E0123 05:40:00.684760 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwrcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68f956777f-b4hm5_calico-apiserver(03afe386-d286-45e1-b2d1-9d888b5a436b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:00.685959 kubelet[2784]: E0123 05:40:00.685887 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-b4hm5" podUID="03afe386-d286-45e1-b2d1-9d888b5a436b" Jan 23 05:40:00.699000 audit: BPF prog-id=246 op=LOAD Jan 23 05:40:00.700000 audit: BPF prog-id=247 op=LOAD Jan 23 05:40:00.700000 audit[4837]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4785 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383161626265386238666562383234336265636632633365396532 Jan 23 05:40:00.700000 audit: BPF prog-id=247 op=UNLOAD Jan 23 05:40:00.700000 audit[4837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4785 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.700000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383161626265386238666562383234336265636632633365396532 Jan 23 05:40:00.701000 audit: BPF prog-id=248 op=LOAD Jan 23 05:40:00.701000 audit[4837]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4785 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383161626265386238666562383234336265636632633365396532 Jan 23 05:40:00.701000 audit: BPF prog-id=249 op=LOAD Jan 23 05:40:00.701000 audit[4837]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4785 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383161626265386238666562383234336265636632633365396532 Jan 23 05:40:00.701000 audit: BPF prog-id=249 op=UNLOAD Jan 23 05:40:00.701000 audit[4837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4785 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 05:40:00.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383161626265386238666562383234336265636632633365396532 Jan 23 05:40:00.701000 audit: BPF prog-id=248 op=UNLOAD Jan 23 05:40:00.701000 audit[4837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4785 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383161626265386238666562383234336265636632633365396532 Jan 23 05:40:00.701000 audit: BPF prog-id=250 op=LOAD Jan 23 05:40:00.701000 audit[4837]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4785 pid=4837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:00.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431383161626265386238666562383234336265636632633365396532 Jan 23 05:40:00.734671 containerd[1597]: time="2026-01-23T05:40:00.734528968Z" level=info msg="StartContainer for \"d181abbe8b8feb8243becf2c3e9e28cacc59d6b4600f49412ac8f3f9b457de24\" returns successfully" Jan 23 05:40:00.979846 systemd-networkd[1506]: cali9c01262bdbd: Gained IPv6LL Jan 23 05:40:00.994366 containerd[1597]: 
time="2026-01-23T05:40:00.994244658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-56x2c,Uid:f2986fef-16f6-4f5c-ada1-3406bc086cb8,Namespace:calico-system,Attempt:0,}" Jan 23 05:40:01.204827 systemd-networkd[1506]: cali9f3a5feeb4f: Link UP Jan 23 05:40:01.206830 systemd-networkd[1506]: cali9f3a5feeb4f: Gained carrier Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.064 [INFO][4873] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--56x2c-eth0 goldmane-666569f655- calico-system f2986fef-16f6-4f5c-ada1-3406bc086cb8 883 0 2026-01-23 05:39:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-56x2c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9f3a5feeb4f [] [] }} ContainerID="ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" Namespace="calico-system" Pod="goldmane-666569f655-56x2c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--56x2c-" Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.065 [INFO][4873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" Namespace="calico-system" Pod="goldmane-666569f655-56x2c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--56x2c-eth0" Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.128 [INFO][4887] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" HandleID="k8s-pod-network.ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" Workload="localhost-k8s-goldmane--666569f655--56x2c-eth0" Jan 23 05:40:01.229424 containerd[1597]: 
2026-01-23 05:40:01.128 [INFO][4887] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" HandleID="k8s-pod-network.ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" Workload="localhost-k8s-goldmane--666569f655--56x2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cfea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-56x2c", "timestamp":"2026-01-23 05:40:01.128315284 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.128 [INFO][4887] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.129 [INFO][4887] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.129 [INFO][4887] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.138 [INFO][4887] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" host="localhost" Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.147 [INFO][4887] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.155 [INFO][4887] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.157 [INFO][4887] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.161 [INFO][4887] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.162 [INFO][4887] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" host="localhost" Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.166 [INFO][4887] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853 Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.182 [INFO][4887] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" host="localhost" Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.191 [INFO][4887] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" host="localhost" Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.192 [INFO][4887] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" host="localhost" Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.192 [INFO][4887] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 05:40:01.229424 containerd[1597]: 2026-01-23 05:40:01.192 [INFO][4887] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" HandleID="k8s-pod-network.ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" Workload="localhost-k8s-goldmane--666569f655--56x2c-eth0" Jan 23 05:40:01.231950 containerd[1597]: 2026-01-23 05:40:01.197 [INFO][4873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" Namespace="calico-system" Pod="goldmane-666569f655-56x2c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--56x2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--56x2c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f2986fef-16f6-4f5c-ada1-3406bc086cb8", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-56x2c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9f3a5feeb4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:40:01.231950 containerd[1597]: 2026-01-23 05:40:01.197 [INFO][4873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" Namespace="calico-system" Pod="goldmane-666569f655-56x2c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--56x2c-eth0" Jan 23 05:40:01.231950 containerd[1597]: 2026-01-23 05:40:01.197 [INFO][4873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f3a5feeb4f ContainerID="ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" Namespace="calico-system" Pod="goldmane-666569f655-56x2c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--56x2c-eth0" Jan 23 05:40:01.231950 containerd[1597]: 2026-01-23 05:40:01.207 [INFO][4873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" Namespace="calico-system" Pod="goldmane-666569f655-56x2c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--56x2c-eth0" Jan 23 05:40:01.231950 containerd[1597]: 2026-01-23 05:40:01.208 [INFO][4873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" Namespace="calico-system" Pod="goldmane-666569f655-56x2c" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--56x2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--56x2c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f2986fef-16f6-4f5c-ada1-3406bc086cb8", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853", Pod:"goldmane-666569f655-56x2c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9f3a5feeb4f", MAC:"96:c0:e7:c6:e0:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:40:01.231950 containerd[1597]: 2026-01-23 05:40:01.224 [INFO][4873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" Namespace="calico-system" Pod="goldmane-666569f655-56x2c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--56x2c-eth0" Jan 23 05:40:01.273157 containerd[1597]: time="2026-01-23T05:40:01.271953752Z" level=info msg="connecting to shim 
ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853" address="unix:///run/containerd/s/dfa09bfc83435213b6e96d9c349e1321cae6020cd32f84364f23d3ba7d83981b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:40:01.273000 audit[4915]: NETFILTER_CFG table=filter:139 family=2 entries=70 op=nft_register_chain pid=4915 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 05:40:01.273000 audit[4915]: SYSCALL arch=c000003e syscall=46 success=yes exit=33956 a0=3 a1=7ffe5082d860 a2=0 a3=7ffe5082d84c items=0 ppid=4015 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:01.273000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 05:40:01.345344 systemd[1]: Started cri-containerd-ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853.scope - libcontainer container ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853. 
Jan 23 05:40:01.365000 audit: BPF prog-id=251 op=LOAD Jan 23 05:40:01.366000 audit: BPF prog-id=252 op=LOAD Jan 23 05:40:01.366000 audit[4926]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4914 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:01.366000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666366138656530373230616330373938623834393066326131616634 Jan 23 05:40:01.367000 audit: BPF prog-id=252 op=UNLOAD Jan 23 05:40:01.367000 audit[4926]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4914 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:01.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666366138656530373230616330373938623834393066326131616634 Jan 23 05:40:01.367000 audit: BPF prog-id=253 op=LOAD Jan 23 05:40:01.367000 audit[4926]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4914 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:01.367000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666366138656530373230616330373938623834393066326131616634 Jan 23 05:40:01.368000 audit: BPF prog-id=254 op=LOAD Jan 23 05:40:01.368000 audit[4926]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4914 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:01.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666366138656530373230616330373938623834393066326131616634 Jan 23 05:40:01.368000 audit: BPF prog-id=254 op=UNLOAD Jan 23 05:40:01.368000 audit[4926]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4914 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:01.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666366138656530373230616330373938623834393066326131616634 Jan 23 05:40:01.368000 audit: BPF prog-id=253 op=UNLOAD Jan 23 05:40:01.368000 audit[4926]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4914 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
05:40:01.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666366138656530373230616330373938623834393066326131616634 Jan 23 05:40:01.368000 audit: BPF prog-id=255 op=LOAD Jan 23 05:40:01.368000 audit[4926]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4914 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:01.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666366138656530373230616330373938623834393066326131616634 Jan 23 05:40:01.378152 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 05:40:01.382994 kubelet[2784]: E0123 05:40:01.382907 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:40:01.386480 kubelet[2784]: E0123 05:40:01.386120 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" podUID="3912831c-901b-4041-9115-637bb8679bc2" Jan 23 05:40:01.386480 kubelet[2784]: E0123 
05:40:01.386265 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-b4hm5" podUID="03afe386-d286-45e1-b2d1-9d888b5a436b" Jan 23 05:40:01.388095 kubelet[2784]: E0123 05:40:01.387527 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:40:01.406164 kubelet[2784]: I0123 05:40:01.405044 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-47b87" podStartSLOduration=36.404767843 podStartE2EDuration="36.404767843s" podCreationTimestamp="2026-01-23 05:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 05:40:01.400822811 +0000 UTC m=+41.540416022" watchObservedRunningTime="2026-01-23 05:40:01.404767843 +0000 UTC m=+41.544361054" Jan 23 05:40:01.440861 containerd[1597]: time="2026-01-23T05:40:01.440653564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-56x2c,Uid:f2986fef-16f6-4f5c-ada1-3406bc086cb8,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff6a8ee0720ac0798b8490f2a1af43c6ab8e7cf8163812c8c25d2f4ddfc2a853\"" Jan 23 05:40:01.450403 containerd[1597]: time="2026-01-23T05:40:01.450320326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 05:40:01.450000 audit[4950]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=4950 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
Jan 23 05:40:01.450000 audit[4950]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcaf6218e0 a2=0 a3=7ffcaf6218cc items=0 ppid=2961 pid=4950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:01.450000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:40:01.457000 audit[4950]: NETFILTER_CFG table=nat:141 family=2 entries=44 op=nft_register_rule pid=4950 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:40:01.457000 audit[4950]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcaf6218e0 a2=0 a3=7ffcaf6218cc items=0 ppid=2961 pid=4950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:01.457000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:40:01.515942 containerd[1597]: time="2026-01-23T05:40:01.515642739Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:01.518247 containerd[1597]: time="2026-01-23T05:40:01.518186098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 05:40:01.518436 containerd[1597]: time="2026-01-23T05:40:01.518396371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:01.518787 kubelet[2784]: E0123 05:40:01.518660 2784 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 05:40:01.518838 kubelet[2784]: E0123 05:40:01.518801 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 05:40:01.519200 kubelet[2784]: E0123 05:40:01.519046 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Vo
lumeMount{Name:kube-api-access-xgb5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-56x2c_calico-system(f2986fef-16f6-4f5c-ada1-3406bc086cb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:01.520368 kubelet[2784]: E0123 05:40:01.520319 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-56x2c" podUID="f2986fef-16f6-4f5c-ada1-3406bc086cb8" Jan 23 05:40:01.939390 systemd-networkd[1506]: cali055b7c23fe9: Gained IPv6LL Jan 23 05:40:01.995375 containerd[1597]: time="2026-01-23T05:40:01.992543217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rpddq,Uid:5e85a0d3-5d32-4d8a-b91c-0a641948fd22,Namespace:calico-system,Attempt:0,}" Jan 23 05:40:02.008178 systemd-networkd[1506]: calia8d5bca012d: Gained IPv6LL Jan 23 05:40:02.144676 systemd-networkd[1506]: cali7030f29fac5: Link UP Jan 23 05:40:02.146199 systemd-networkd[1506]: cali7030f29fac5: Gained carrier Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.052 [INFO][4953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rpddq-eth0 csi-node-driver- calico-system 5e85a0d3-5d32-4d8a-b91c-0a641948fd22 770 0 2026-01-23 05:39:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rpddq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7030f29fac5 [] [] }} ContainerID="5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" Namespace="calico-system" Pod="csi-node-driver-rpddq" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpddq-" Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.053 [INFO][4953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" Namespace="calico-system" Pod="csi-node-driver-rpddq" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpddq-eth0" Jan 23 
05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.088 [INFO][4966] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" HandleID="k8s-pod-network.5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" Workload="localhost-k8s-csi--node--driver--rpddq-eth0" Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.089 [INFO][4966] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" HandleID="k8s-pod-network.5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" Workload="localhost-k8s-csi--node--driver--rpddq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a2680), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rpddq", "timestamp":"2026-01-23 05:40:02.088864982 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.089 [INFO][4966] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.089 [INFO][4966] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.089 [INFO][4966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.099 [INFO][4966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" host="localhost" Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.106 [INFO][4966] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.111 [INFO][4966] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.115 [INFO][4966] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.118 [INFO][4966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.118 [INFO][4966] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" host="localhost" Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.120 [INFO][4966] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0 Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.128 [INFO][4966] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" host="localhost" Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.137 [INFO][4966] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" host="localhost" Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.137 [INFO][4966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" host="localhost" Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.137 [INFO][4966] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 05:40:02.168658 containerd[1597]: 2026-01-23 05:40:02.138 [INFO][4966] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" HandleID="k8s-pod-network.5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" Workload="localhost-k8s-csi--node--driver--rpddq-eth0" Jan 23 05:40:02.169583 containerd[1597]: 2026-01-23 05:40:02.141 [INFO][4953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" Namespace="calico-system" Pod="csi-node-driver-rpddq" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpddq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rpddq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e85a0d3-5d32-4d8a-b91c-0a641948fd22", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rpddq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7030f29fac5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:40:02.169583 containerd[1597]: 2026-01-23 05:40:02.141 [INFO][4953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" Namespace="calico-system" Pod="csi-node-driver-rpddq" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpddq-eth0" Jan 23 05:40:02.169583 containerd[1597]: 2026-01-23 05:40:02.141 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7030f29fac5 ContainerID="5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" Namespace="calico-system" Pod="csi-node-driver-rpddq" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpddq-eth0" Jan 23 05:40:02.169583 containerd[1597]: 2026-01-23 05:40:02.146 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" Namespace="calico-system" Pod="csi-node-driver-rpddq" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpddq-eth0" Jan 23 05:40:02.169583 containerd[1597]: 2026-01-23 05:40:02.147 [INFO][4953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" 
Namespace="calico-system" Pod="csi-node-driver-rpddq" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpddq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rpddq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e85a0d3-5d32-4d8a-b91c-0a641948fd22", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 5, 39, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0", Pod:"csi-node-driver-rpddq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7030f29fac5", MAC:"3a:6f:9f:ce:5a:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 05:40:02.169583 containerd[1597]: 2026-01-23 05:40:02.163 [INFO][4953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" Namespace="calico-system" Pod="csi-node-driver-rpddq" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--rpddq-eth0" Jan 23 05:40:02.203046 containerd[1597]: time="2026-01-23T05:40:02.202901810Z" level=info msg="connecting to shim 5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0" address="unix:///run/containerd/s/1f932905895445f22023935c674d2ff64d48a9f66436628072b0c835dd55247c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 05:40:02.203000 audit[4990]: NETFILTER_CFG table=filter:142 family=2 entries=56 op=nft_register_chain pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 05:40:02.203000 audit[4990]: SYSCALL arch=c000003e syscall=46 success=yes exit=25484 a0=3 a1=7ffdb2d75a10 a2=0 a3=7ffdb2d759fc items=0 ppid=4015 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:02.203000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 05:40:02.250435 systemd[1]: Started cri-containerd-5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0.scope - libcontainer container 5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0. 
Jan 23 05:40:02.270000 audit: BPF prog-id=256 op=LOAD Jan 23 05:40:02.280125 kernel: kauditd_printk_skb: 242 callbacks suppressed Jan 23 05:40:02.280221 kernel: audit: type=1334 audit(1769146802.270:738): prog-id=256 op=LOAD Jan 23 05:40:02.275000 audit: BPF prog-id=257 op=LOAD Jan 23 05:40:02.282844 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 05:40:02.285642 kernel: audit: type=1334 audit(1769146802.275:739): prog-id=257 op=LOAD Jan 23 05:40:02.275000 audit[5003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4992 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:02.297882 kernel: audit: type=1300 audit(1769146802.275:739): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4992 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:02.298140 kernel: audit: type=1327 audit(1769146802.275:739): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633764326463613539613836323031646230316232363764393865 Jan 23 05:40:02.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633764326463613539613836323031646230316232363764393865 Jan 23 05:40:02.275000 audit: BPF prog-id=257 op=UNLOAD Jan 23 05:40:02.313121 kernel: audit: type=1334 audit(1769146802.275:740): prog-id=257 op=UNLOAD 
Jan 23 05:40:02.313641 kernel: audit: type=1300 audit(1769146802.275:740): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4992 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:02.275000 audit[5003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4992 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:02.326881 kernel: audit: type=1327 audit(1769146802.275:740): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633764326463613539613836323031646230316232363764393865 Jan 23 05:40:02.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633764326463613539613836323031646230316232363764393865 Jan 23 05:40:02.275000 audit: BPF prog-id=258 op=LOAD Jan 23 05:40:02.342916 kernel: audit: type=1334 audit(1769146802.275:741): prog-id=258 op=LOAD Jan 23 05:40:02.342985 kernel: audit: type=1300 audit(1769146802.275:741): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4992 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:02.275000 audit[5003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4992 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:02.343808 containerd[1597]: time="2026-01-23T05:40:02.343748069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rpddq,Uid:5e85a0d3-5d32-4d8a-b91c-0a641948fd22,Namespace:calico-system,Attempt:0,} returns sandbox id \"5bc7d2dca59a86201db01b267d98e40532d4556feea53b315564c71ec7bf4bc0\"" Jan 23 05:40:02.346773 containerd[1597]: time="2026-01-23T05:40:02.346686238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 05:40:02.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633764326463613539613836323031646230316232363764393865 Jan 23 05:40:02.368419 kernel: audit: type=1327 audit(1769146802.275:741): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633764326463613539613836323031646230316232363764393865 Jan 23 05:40:02.276000 audit: BPF prog-id=259 op=LOAD Jan 23 05:40:02.276000 audit[5003]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4992 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:02.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633764326463613539613836323031646230316232363764393865 Jan 23 05:40:02.276000 audit: BPF prog-id=259 op=UNLOAD Jan 
23 05:40:02.276000 audit[5003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4992 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:02.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633764326463613539613836323031646230316232363764393865 Jan 23 05:40:02.276000 audit: BPF prog-id=258 op=UNLOAD Jan 23 05:40:02.276000 audit[5003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4992 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:02.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633764326463613539613836323031646230316232363764393865 Jan 23 05:40:02.276000 audit: BPF prog-id=260 op=LOAD Jan 23 05:40:02.276000 audit[5003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4992 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:02.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562633764326463613539613836323031646230316232363764393865 Jan 23 05:40:02.391039 
kubelet[2784]: E0123 05:40:02.390918 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:40:02.392573 kubelet[2784]: E0123 05:40:02.392014 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-b4hm5" podUID="03afe386-d286-45e1-b2d1-9d888b5a436b" Jan 23 05:40:02.392573 kubelet[2784]: E0123 05:40:02.392505 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-56x2c" podUID="f2986fef-16f6-4f5c-ada1-3406bc086cb8" Jan 23 05:40:02.427544 containerd[1597]: time="2026-01-23T05:40:02.427399989Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:02.428903 containerd[1597]: time="2026-01-23T05:40:02.428790589Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 05:40:02.429302 containerd[1597]: time="2026-01-23T05:40:02.428902428Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:02.429780 kubelet[2784]: E0123 05:40:02.429143 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 05:40:02.429780 kubelet[2784]: E0123 05:40:02.429196 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 05:40:02.429780 kubelet[2784]: E0123 05:40:02.429342 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gpdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnl
y:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rpddq_calico-system(5e85a0d3-5d32-4d8a-b91c-0a641948fd22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:02.432115 containerd[1597]: time="2026-01-23T05:40:02.432022823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 05:40:02.442000 audit[5029]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:40:02.442000 audit[5029]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffd4f39430 a2=0 a3=7fffd4f3941c items=0 ppid=2961 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:02.442000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:40:02.451555 systemd-networkd[1506]: cali9f3a5feeb4f: Gained IPv6LL Jan 23 05:40:02.468000 audit[5029]: NETFILTER_CFG table=nat:144 
family=2 entries=56 op=nft_register_chain pid=5029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:40:02.468000 audit[5029]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fffd4f39430 a2=0 a3=7fffd4f3941c items=0 ppid=2961 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:02.468000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:40:02.560536 containerd[1597]: time="2026-01-23T05:40:02.560493375Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:02.562521 containerd[1597]: time="2026-01-23T05:40:02.562422053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 05:40:02.562640 containerd[1597]: time="2026-01-23T05:40:02.562576982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:02.563096 kubelet[2784]: E0123 05:40:02.562907 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 05:40:02.563096 kubelet[2784]: E0123 05:40:02.562984 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 05:40:02.563271 kubelet[2784]: E0123 05:40:02.563204 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gpdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,St
dinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rpddq_calico-system(5e85a0d3-5d32-4d8a-b91c-0a641948fd22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:02.564560 kubelet[2784]: E0123 05:40:02.564494 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpddq" podUID="5e85a0d3-5d32-4d8a-b91c-0a641948fd22" Jan 23 05:40:03.392107 kubelet[2784]: E0123 05:40:03.392009 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:40:03.393405 kubelet[2784]: E0123 05:40:03.393188 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found\"" pod="calico-system/goldmane-666569f655-56x2c" podUID="f2986fef-16f6-4f5c-ada1-3406bc086cb8" Jan 23 05:40:03.393972 kubelet[2784]: E0123 05:40:03.393910 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpddq" podUID="5e85a0d3-5d32-4d8a-b91c-0a641948fd22" Jan 23 05:40:03.540553 systemd-networkd[1506]: cali7030f29fac5: Gained IPv6LL Jan 23 05:40:06.993295 containerd[1597]: time="2026-01-23T05:40:06.992985068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 05:40:07.065943 containerd[1597]: time="2026-01-23T05:40:07.065869534Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:07.067248 containerd[1597]: time="2026-01-23T05:40:07.067160896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 05:40:07.067383 containerd[1597]: time="2026-01-23T05:40:07.067185088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:07.067613 
kubelet[2784]: E0123 05:40:07.067569 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 05:40:07.068135 kubelet[2784]: E0123 05:40:07.067630 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 05:40:07.068135 kubelet[2784]: E0123 05:40:07.067793 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2155c05c8ee142bb8990bf0ae2991b80,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2srbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:
RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-749c8bf99-5hnhn_calico-system(43eecc48-4e9e-429e-8243-803259cf177c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:07.070920 containerd[1597]: time="2026-01-23T05:40:07.070577543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 05:40:07.135954 containerd[1597]: time="2026-01-23T05:40:07.135786203Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:07.137186 containerd[1597]: time="2026-01-23T05:40:07.137139913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 05:40:07.137304 containerd[1597]: time="2026-01-23T05:40:07.137193736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:07.137439 kubelet[2784]: E0123 05:40:07.137389 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 05:40:07.137546 kubelet[2784]: E0123 05:40:07.137451 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 05:40:07.137623 kubelet[2784]: E0123 05:40:07.137581 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2srbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSour
ce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-749c8bf99-5hnhn_calico-system(43eecc48-4e9e-429e-8243-803259cf177c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:07.139202 kubelet[2784]: E0123 05:40:07.139124 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749c8bf99-5hnhn" podUID="43eecc48-4e9e-429e-8243-803259cf177c" Jan 23 05:40:11.995677 containerd[1597]: time="2026-01-23T05:40:11.995608150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 05:40:12.062806 containerd[1597]: time="2026-01-23T05:40:12.062749921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:12.064441 containerd[1597]: time="2026-01-23T05:40:12.064386647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 05:40:12.064493 containerd[1597]: time="2026-01-23T05:40:12.064485270Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:12.064878 kubelet[2784]: E0123 05:40:12.064791 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:40:12.064878 kubelet[2784]: E0123 05:40:12.064857 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:40:12.065310 kubelet[2784]: E0123 05:40:12.065111 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhcw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68f956777f-dhm59_calico-apiserver(9393f034-f86c-4875-9435-8f85b0225d78): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:12.066412 kubelet[2784]: E0123 05:40:12.066188 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" podUID="9393f034-f86c-4875-9435-8f85b0225d78" Jan 23 05:40:12.066591 containerd[1597]: time="2026-01-23T05:40:12.066520906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 05:40:12.124854 containerd[1597]: time="2026-01-23T05:40:12.124602085Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:12.126185 containerd[1597]: time="2026-01-23T05:40:12.126139208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 05:40:12.126273 containerd[1597]: time="2026-01-23T05:40:12.126192253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:12.126398 kubelet[2784]: E0123 05:40:12.126350 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 05:40:12.126398 kubelet[2784]: E0123 05:40:12.126407 2784 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 05:40:12.126629 kubelet[2784]: E0123 05:40:12.126546 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-686jd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6986698974-zhprj_calico-system(3cdbf9fd-0ae3-408e-ba62-8b7474385dec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:12.127797 kubelet[2784]: E0123 05:40:12.127746 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6986698974-zhprj" podUID="3cdbf9fd-0ae3-408e-ba62-8b7474385dec" Jan 23 05:40:13.994414 
containerd[1597]: time="2026-01-23T05:40:13.993539092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 05:40:14.070286 containerd[1597]: time="2026-01-23T05:40:14.070137657Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:14.073389 containerd[1597]: time="2026-01-23T05:40:14.073330632Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 05:40:14.073487 containerd[1597]: time="2026-01-23T05:40:14.073431425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:14.073846 kubelet[2784]: E0123 05:40:14.073758 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 05:40:14.073846 kubelet[2784]: E0123 05:40:14.073822 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 05:40:14.074416 kubelet[2784]: E0123 05:40:14.073987 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gpdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rpddq_calico-system(5e85a0d3-5d32-4d8a-b91c-0a641948fd22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 23 05:40:14.076813 containerd[1597]: time="2026-01-23T05:40:14.076761021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 05:40:14.140222 containerd[1597]: time="2026-01-23T05:40:14.140047550Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:14.141584 containerd[1597]: time="2026-01-23T05:40:14.141488574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 05:40:14.141652 containerd[1597]: time="2026-01-23T05:40:14.141614492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:14.142035 kubelet[2784]: E0123 05:40:14.141899 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 05:40:14.142035 kubelet[2784]: E0123 05:40:14.141987 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 05:40:14.142273 kubelet[2784]: E0123 05:40:14.142188 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gpdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rpddq_calico-system(5e85a0d3-5d32-4d8a-b91c-0a641948fd22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:14.143555 kubelet[2784]: E0123 05:40:14.143490 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpddq" podUID="5e85a0d3-5d32-4d8a-b91c-0a641948fd22" Jan 23 05:40:14.994643 containerd[1597]: time="2026-01-23T05:40:14.994320981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 05:40:15.073452 containerd[1597]: time="2026-01-23T05:40:15.073388491Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:15.074950 containerd[1597]: time="2026-01-23T05:40:15.074831755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 05:40:15.074950 containerd[1597]: time="2026-01-23T05:40:15.074925560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:15.075787 kubelet[2784]: E0123 05:40:15.075621 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:40:15.075787 kubelet[2784]: E0123 05:40:15.075696 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:40:15.076333 kubelet[2784]: E0123 05:40:15.075909 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrpbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fb784f8d9-hrfdf_calico-apiserver(3912831c-901b-4041-9115-637bb8679bc2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:15.077210 kubelet[2784]: E0123 05:40:15.077120 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" podUID="3912831c-901b-4041-9115-637bb8679bc2" Jan 23 05:40:15.997026 containerd[1597]: time="2026-01-23T05:40:15.996970317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 05:40:16.067693 containerd[1597]: time="2026-01-23T05:40:16.067633699Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
05:40:16.069301 containerd[1597]: time="2026-01-23T05:40:16.069254481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 05:40:16.069359 containerd[1597]: time="2026-01-23T05:40:16.069327256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:16.069621 kubelet[2784]: E0123 05:40:16.069573 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:40:16.069682 kubelet[2784]: E0123 05:40:16.069634 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:40:16.069914 kubelet[2784]: E0123 05:40:16.069847 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwrcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68f956777f-b4hm5_calico-apiserver(03afe386-d286-45e1-b2d1-9d888b5a436b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:16.071359 kubelet[2784]: E0123 05:40:16.071268 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-b4hm5" podUID="03afe386-d286-45e1-b2d1-9d888b5a436b" Jan 23 05:40:17.996848 containerd[1597]: time="2026-01-23T05:40:17.996767585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 05:40:18.053399 containerd[1597]: time="2026-01-23T05:40:18.053332122Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:18.055083 containerd[1597]: time="2026-01-23T05:40:18.054891803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 05:40:18.055083 containerd[1597]: time="2026-01-23T05:40:18.055026105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:18.055448 kubelet[2784]: E0123 05:40:18.055379 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 05:40:18.055448 kubelet[2784]: E0123 05:40:18.055442 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 05:40:18.056586 kubelet[2784]: E0123 05:40:18.055597 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgb5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-56x2c_calico-system(f2986fef-16f6-4f5c-ada1-3406bc086cb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:18.057070 kubelet[2784]: E0123 05:40:18.056943 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-56x2c" podUID="f2986fef-16f6-4f5c-ada1-3406bc086cb8" Jan 23 05:40:19.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.10:22-10.0.0.1:42544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:19.785005 systemd[1]: Started sshd@9-10.0.0.10:22-10.0.0.1:42544.service - OpenSSH per-connection server daemon (10.0.0.1:42544). Jan 23 05:40:19.795143 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 23 05:40:19.795259 kernel: audit: type=1130 audit(1769146819.784:748): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.10:22-10.0.0.1:42544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:19.909000 audit[5052]: USER_ACCT pid=5052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:19.913767 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:40:19.914531 sshd[5052]: Accepted publickey for core from 10.0.0.1 port 42544 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:40:19.919581 systemd-logind[1569]: New session 11 of user core. 
Jan 23 05:40:19.911000 audit[5052]: CRED_ACQ pid=5052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:19.934801 kernel: audit: type=1101 audit(1769146819.909:749): pid=5052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:19.934892 kernel: audit: type=1103 audit(1769146819.911:750): pid=5052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:19.942527 kernel: audit: type=1006 audit(1769146819.911:751): pid=5052 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 23 05:40:19.911000 audit[5052]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbab28250 a2=3 a3=0 items=0 ppid=1 pid=5052 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:19.955811 kernel: audit: type=1300 audit(1769146819.911:751): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbab28250 a2=3 a3=0 items=0 ppid=1 pid=5052 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:19.911000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:19.956404 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 23 05:40:19.961429 kernel: audit: type=1327 audit(1769146819.911:751): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:19.962000 audit[5052]: USER_START pid=5052 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:19.978145 kernel: audit: type=1105 audit(1769146819.962:752): pid=5052 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:19.969000 audit[5056]: CRED_ACQ pid=5056 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:19.989107 kernel: audit: type=1103 audit(1769146819.969:753): pid=5056 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:20.141571 sshd[5056]: Connection closed by 10.0.0.1 port 42544 Jan 23 05:40:20.142112 sshd-session[5052]: pam_unix(sshd:session): session closed for user core Jan 23 05:40:20.143000 audit[5052]: USER_END pid=5052 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 23 05:40:20.148785 systemd[1]: sshd@9-10.0.0.10:22-10.0.0.1:42544.service: Deactivated successfully. Jan 23 05:40:20.152282 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 05:40:20.143000 audit[5052]: CRED_DISP pid=5052 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:20.154763 systemd-logind[1569]: Session 11 logged out. Waiting for processes to exit. Jan 23 05:40:20.157040 systemd-logind[1569]: Removed session 11. Jan 23 05:40:20.162755 kernel: audit: type=1106 audit(1769146820.143:754): pid=5052 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:20.163023 kernel: audit: type=1104 audit(1769146820.143:755): pid=5052 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:20.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.10:22-10.0.0.1:42544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:40:21.994390 kubelet[2784]: E0123 05:40:21.994308 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749c8bf99-5hnhn" podUID="43eecc48-4e9e-429e-8243-803259cf177c" Jan 23 05:40:22.997190 kubelet[2784]: E0123 05:40:22.997106 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" podUID="9393f034-f86c-4875-9435-8f85b0225d78" Jan 23 05:40:25.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.10:22-10.0.0.1:53618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:25.158851 systemd[1]: Started sshd@10-10.0.0.10:22-10.0.0.1:53618.service - OpenSSH per-connection server daemon (10.0.0.1:53618). 
Jan 23 05:40:25.167987 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 05:40:25.168123 kernel: audit: type=1130 audit(1769146825.157:757): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.10:22-10.0.0.1:53618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:25.224000 audit[5076]: USER_ACCT pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:25.228878 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:40:25.232789 sshd[5076]: Accepted publickey for core from 10.0.0.1 port 53618 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:40:25.244715 kernel: audit: type=1101 audit(1769146825.224:758): pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:25.244827 kernel: audit: type=1103 audit(1769146825.225:759): pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:25.225000 audit[5076]: CRED_ACQ pid=5076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:25.239186 systemd-logind[1569]: New session 12 of user core. 
Jan 23 05:40:25.225000 audit[5076]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe371bdb20 a2=3 a3=0 items=0 ppid=1 pid=5076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:25.257960 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 05:40:25.266138 kernel: audit: type=1006 audit(1769146825.225:760): pid=5076 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 23 05:40:25.266208 kernel: audit: type=1300 audit(1769146825.225:760): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe371bdb20 a2=3 a3=0 items=0 ppid=1 pid=5076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:25.267184 kernel: audit: type=1327 audit(1769146825.225:760): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:25.225000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:25.263000 audit[5076]: USER_START pid=5076 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:25.293390 kernel: audit: type=1105 audit(1769146825.263:761): pid=5076 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:25.267000 
audit[5097]: CRED_ACQ pid=5097 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:25.303240 kernel: audit: type=1103 audit(1769146825.267:762): pid=5097 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:25.402222 sshd[5097]: Connection closed by 10.0.0.1 port 53618 Jan 23 05:40:25.402551 sshd-session[5076]: pam_unix(sshd:session): session closed for user core Jan 23 05:40:25.403000 audit[5076]: USER_END pid=5076 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:25.408795 systemd[1]: sshd@10-10.0.0.10:22-10.0.0.1:53618.service: Deactivated successfully. Jan 23 05:40:25.413617 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 05:40:25.415095 kernel: audit: type=1106 audit(1769146825.403:763): pid=5076 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:25.403000 audit[5076]: CRED_DISP pid=5076 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:25.417645 systemd-logind[1569]: Session 12 logged out. 
Waiting for processes to exit. Jan 23 05:40:25.419849 systemd-logind[1569]: Removed session 12. Jan 23 05:40:25.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.10:22-10.0.0.1:53618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:25.424202 kernel: audit: type=1104 audit(1769146825.403:764): pid=5076 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:26.018457 kubelet[2784]: E0123 05:40:26.018281 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" podUID="3912831c-901b-4041-9115-637bb8679bc2" Jan 23 05:40:26.018457 kubelet[2784]: E0123 05:40:26.018398 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpddq" podUID="5e85a0d3-5d32-4d8a-b91c-0a641948fd22" Jan 23 05:40:26.993748 kubelet[2784]: E0123 05:40:26.993591 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6986698974-zhprj" podUID="3cdbf9fd-0ae3-408e-ba62-8b7474385dec" Jan 23 05:40:28.993237 kubelet[2784]: E0123 05:40:28.993039 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-b4hm5" podUID="03afe386-d286-45e1-b2d1-9d888b5a436b" Jan 23 05:40:30.418660 systemd[1]: Started sshd@11-10.0.0.10:22-10.0.0.1:53620.service - OpenSSH per-connection server daemon (10.0.0.1:53620). Jan 23 05:40:30.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.10:22-10.0.0.1:53620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:40:30.421915 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 05:40:30.421983 kernel: audit: type=1130 audit(1769146830.417:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.10:22-10.0.0.1:53620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:30.518000 audit[5122]: USER_ACCT pid=5122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:30.519759 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 53620 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:40:30.528000 audit[5122]: CRED_ACQ pid=5122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:30.530149 kernel: audit: type=1101 audit(1769146830.518:767): pid=5122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:30.530202 kernel: audit: type=1103 audit(1769146830.528:768): pid=5122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:30.531666 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:40:30.537864 systemd-logind[1569]: New session 13 of user core. 
Jan 23 05:40:30.545445 kernel: audit: type=1006 audit(1769146830.528:769): pid=5122 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 23 05:40:30.545522 kernel: audit: type=1300 audit(1769146830.528:769): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4a2d39d0 a2=3 a3=0 items=0 ppid=1 pid=5122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:30.528000 audit[5122]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4a2d39d0 a2=3 a3=0 items=0 ppid=1 pid=5122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:30.554158 kernel: audit: type=1327 audit(1769146830.528:769): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:30.528000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:30.562498 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 23 05:40:30.564000 audit[5122]: USER_START pid=5122 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:30.564000 audit[5126]: CRED_ACQ pid=5126 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:30.583752 kernel: audit: type=1105 audit(1769146830.564:770): pid=5122 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:30.583826 kernel: audit: type=1103 audit(1769146830.564:771): pid=5126 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:30.661308 sshd[5126]: Connection closed by 10.0.0.1 port 53620 Jan 23 05:40:30.661755 sshd-session[5122]: pam_unix(sshd:session): session closed for user core Jan 23 05:40:30.662000 audit[5122]: USER_END pid=5122 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:30.667314 systemd[1]: sshd@11-10.0.0.10:22-10.0.0.1:53620.service: Deactivated successfully. 
Jan 23 05:40:30.670309 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 05:40:30.673002 systemd-logind[1569]: Session 13 logged out. Waiting for processes to exit. Jan 23 05:40:30.675462 systemd-logind[1569]: Removed session 13. Jan 23 05:40:30.662000 audit[5122]: CRED_DISP pid=5122 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:30.683705 kernel: audit: type=1106 audit(1769146830.662:772): pid=5122 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:30.683760 kernel: audit: type=1104 audit(1769146830.662:773): pid=5122 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:30.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.10:22-10.0.0.1:53620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:40:30.993570 kubelet[2784]: E0123 05:40:30.993192 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-56x2c" podUID="f2986fef-16f6-4f5c-ada1-3406bc086cb8" Jan 23 05:40:33.995507 containerd[1597]: time="2026-01-23T05:40:33.995444667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 05:40:34.063045 containerd[1597]: time="2026-01-23T05:40:34.062979559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:34.064483 containerd[1597]: time="2026-01-23T05:40:34.064436096Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 05:40:34.064594 containerd[1597]: time="2026-01-23T05:40:34.064537549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:34.064747 kubelet[2784]: E0123 05:40:34.064708 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 05:40:34.065222 kubelet[2784]: E0123 05:40:34.064763 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 05:40:34.067417 kubelet[2784]: E0123 05:40:34.067353 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2155c05c8ee142bb8990bf0ae2991b80,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2srbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-749c8bf99-5hnhn_calico-system(43eecc48-4e9e-429e-8243-803259cf177c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
logger="UnhandledError" Jan 23 05:40:34.069959 containerd[1597]: time="2026-01-23T05:40:34.069890027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 05:40:34.144259 containerd[1597]: time="2026-01-23T05:40:34.144022702Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:34.145611 containerd[1597]: time="2026-01-23T05:40:34.145569147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 05:40:34.146148 containerd[1597]: time="2026-01-23T05:40:34.145735078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:34.146203 kubelet[2784]: E0123 05:40:34.145880 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 05:40:34.146203 kubelet[2784]: E0123 05:40:34.145928 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 05:40:34.146203 kubelet[2784]: E0123 05:40:34.146042 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2srbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-749c8bf99-5hnhn_calico-system(43eecc48-4e9e-429e-8243-803259cf177c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:34.147404 kubelet[2784]: E0123 05:40:34.147331 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749c8bf99-5hnhn" podUID="43eecc48-4e9e-429e-8243-803259cf177c" Jan 23 05:40:34.993568 containerd[1597]: time="2026-01-23T05:40:34.992950434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 05:40:35.054306 containerd[1597]: time="2026-01-23T05:40:35.054225355Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:35.055725 containerd[1597]: time="2026-01-23T05:40:35.055506034Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 05:40:35.055725 containerd[1597]: time="2026-01-23T05:40:35.055567636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:35.056258 kubelet[2784]: E0123 05:40:35.055824 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:40:35.056258 kubelet[2784]: E0123 05:40:35.055894 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:40:35.056258 kubelet[2784]: E0123 05:40:35.056031 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhcw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68f956777f-dhm59_calico-apiserver(9393f034-f86c-4875-9435-8f85b0225d78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:35.057436 kubelet[2784]: E0123 05:40:35.057393 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" podUID="9393f034-f86c-4875-9435-8f85b0225d78" Jan 23 05:40:35.677757 systemd[1]: Started sshd@12-10.0.0.10:22-10.0.0.1:47898.service - OpenSSH per-connection server daemon (10.0.0.1:47898). 
Jan 23 05:40:35.702200 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 05:40:35.702258 kernel: audit: type=1130 audit(1769146835.676:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.10:22-10.0.0.1:47898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:35.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.10:22-10.0.0.1:47898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:35.795000 audit[5148]: USER_ACCT pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:35.800122 sshd-session[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:40:35.807780 sshd[5148]: Accepted publickey for core from 10.0.0.1 port 47898 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:40:35.808143 kernel: audit: type=1101 audit(1769146835.795:776): pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:35.797000 audit[5148]: CRED_ACQ pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:35.810636 systemd-logind[1569]: New session 14 of user core. 
Jan 23 05:40:35.823141 kernel: audit: type=1103 audit(1769146835.797:777): pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:35.823318 kernel: audit: type=1006 audit(1769146835.797:778): pid=5148 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 23 05:40:35.823355 kernel: audit: type=1300 audit(1769146835.797:778): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffede437110 a2=3 a3=0 items=0 ppid=1 pid=5148 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:35.797000 audit[5148]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffede437110 a2=3 a3=0 items=0 ppid=1 pid=5148 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:35.797000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:35.838813 kernel: audit: type=1327 audit(1769146835.797:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:35.843586 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 23 05:40:35.848000 audit[5148]: USER_START pid=5148 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:35.851000 audit[5152]: CRED_ACQ pid=5152 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:35.888375 kernel: audit: type=1105 audit(1769146835.848:779): pid=5148 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:35.888505 kernel: audit: type=1103 audit(1769146835.851:780): pid=5152 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:36.016915 sshd[5152]: Connection closed by 10.0.0.1 port 47898 Jan 23 05:40:36.016814 sshd-session[5148]: pam_unix(sshd:session): session closed for user core Jan 23 05:40:36.018000 audit[5148]: USER_END pid=5148 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:36.024606 systemd[1]: sshd@12-10.0.0.10:22-10.0.0.1:47898.service: Deactivated successfully. 
Jan 23 05:40:36.028857 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 05:40:36.018000 audit[5148]: CRED_DISP pid=5148 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:36.033355 systemd-logind[1569]: Session 14 logged out. Waiting for processes to exit. Jan 23 05:40:36.035555 systemd-logind[1569]: Removed session 14. Jan 23 05:40:36.042348 kernel: audit: type=1106 audit(1769146836.018:781): pid=5148 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:36.042483 kernel: audit: type=1104 audit(1769146836.018:782): pid=5148 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:36.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.10:22-10.0.0.1:47898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:40:36.992111 kubelet[2784]: E0123 05:40:36.991876 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:40:36.992111 kubelet[2784]: E0123 05:40:36.992031 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:40:37.997032 containerd[1597]: time="2026-01-23T05:40:37.996771882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 05:40:38.055113 containerd[1597]: time="2026-01-23T05:40:38.054888957Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:38.057257 containerd[1597]: time="2026-01-23T05:40:38.057207141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 05:40:38.057402 containerd[1597]: time="2026-01-23T05:40:38.057358864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:38.057636 kubelet[2784]: E0123 05:40:38.057585 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 05:40:38.058810 kubelet[2784]: E0123 05:40:38.058302 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 05:40:38.058810 kubelet[2784]: E0123 05:40:38.058713 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-686jd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6986698974-zhprj_calico-system(3cdbf9fd-0ae3-408e-ba62-8b7474385dec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:38.060147 kubelet[2784]: E0123 05:40:38.060096 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6986698974-zhprj" podUID="3cdbf9fd-0ae3-408e-ba62-8b7474385dec" Jan 23 05:40:38.995896 containerd[1597]: time="2026-01-23T05:40:38.993544785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 05:40:39.092231 containerd[1597]: time="2026-01-23T05:40:39.092122729Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
05:40:39.094500 containerd[1597]: time="2026-01-23T05:40:39.094383287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 05:40:39.094500 containerd[1597]: time="2026-01-23T05:40:39.094440677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:39.095007 kubelet[2784]: E0123 05:40:39.094850 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 05:40:39.095007 kubelet[2784]: E0123 05:40:39.094949 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 05:40:39.096006 kubelet[2784]: E0123 05:40:39.095364 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gpdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rpddq_calico-system(5e85a0d3-5d32-4d8a-b91c-0a641948fd22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 23 05:40:39.099235 containerd[1597]: time="2026-01-23T05:40:39.099126924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 05:40:39.174822 containerd[1597]: time="2026-01-23T05:40:39.173413714Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:39.183390 containerd[1597]: time="2026-01-23T05:40:39.182574268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:39.183877 containerd[1597]: time="2026-01-23T05:40:39.183373098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 05:40:39.202303 kubelet[2784]: E0123 05:40:39.201821 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 05:40:39.202303 kubelet[2784]: E0123 05:40:39.202253 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 05:40:39.202996 kubelet[2784]: E0123 05:40:39.202904 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gpdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rpddq_calico-system(5e85a0d3-5d32-4d8a-b91c-0a641948fd22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:39.205003 kubelet[2784]: E0123 05:40:39.204868 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpddq" podUID="5e85a0d3-5d32-4d8a-b91c-0a641948fd22" Jan 23 05:40:40.996886 containerd[1597]: time="2026-01-23T05:40:40.996798150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 05:40:41.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.10:22-10.0.0.1:47906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:41.037589 systemd[1]: Started sshd@13-10.0.0.10:22-10.0.0.1:47906.service - OpenSSH per-connection server daemon (10.0.0.1:47906). Jan 23 05:40:41.040270 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 05:40:41.040342 kernel: audit: type=1130 audit(1769146841.036:784): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.10:22-10.0.0.1:47906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:40:41.084431 containerd[1597]: time="2026-01-23T05:40:41.084029556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:41.103748 containerd[1597]: time="2026-01-23T05:40:41.103528265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:41.104380 containerd[1597]: time="2026-01-23T05:40:41.104320130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 05:40:41.104704 kubelet[2784]: E0123 05:40:41.104625 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:40:41.105383 kubelet[2784]: E0123 05:40:41.104715 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:40:41.105383 kubelet[2784]: E0123 05:40:41.104876 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrpbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fb784f8d9-hrfdf_calico-apiserver(3912831c-901b-4041-9115-637bb8679bc2): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:41.106501 kubelet[2784]: E0123 05:40:41.106422 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" podUID="3912831c-901b-4041-9115-637bb8679bc2" Jan 23 05:40:41.149000 audit[5166]: USER_ACCT pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.156105 sshd[5166]: Accepted publickey for core from 10.0.0.1 port 47906 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:40:41.162147 kernel: audit: type=1101 audit(1769146841.149:785): pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.163000 audit[5166]: CRED_ACQ pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.167648 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:40:41.182258 kernel: audit: type=1103 audit(1769146841.163:786): pid=5166 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.163000 audit[5166]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff64cd59d0 a2=3 a3=0 items=0 ppid=1 pid=5166 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:41.212197 kernel: audit: type=1006 audit(1769146841.163:787): pid=5166 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 23 05:40:41.212295 kernel: audit: type=1300 audit(1769146841.163:787): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff64cd59d0 a2=3 a3=0 items=0 ppid=1 pid=5166 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:41.163000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:41.215344 systemd-logind[1569]: New session 15 of user core. Jan 23 05:40:41.216708 kernel: audit: type=1327 audit(1769146841.163:787): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:41.221611 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 23 05:40:41.228000 audit[5166]: USER_START pid=5166 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.243127 kernel: audit: type=1105 audit(1769146841.228:788): pid=5166 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.243199 kernel: audit: type=1103 audit(1769146841.233:789): pid=5170 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.233000 audit[5170]: CRED_ACQ pid=5170 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.426918 sshd[5170]: Connection closed by 10.0.0.1 port 47906 Jan 23 05:40:41.429755 sshd-session[5166]: pam_unix(sshd:session): session closed for user core Jan 23 05:40:41.431000 audit[5166]: USER_END pid=5166 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.444467 kernel: audit: type=1106 audit(1769146841.431:790): pid=5166 uid=0 auid=500 ses=15 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.447528 kernel: audit: type=1104 audit(1769146841.431:791): pid=5166 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.431000 audit[5166]: CRED_DISP pid=5166 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.461415 systemd[1]: sshd@13-10.0.0.10:22-10.0.0.1:47906.service: Deactivated successfully. Jan 23 05:40:41.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.10:22-10.0.0.1:47906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:41.464898 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 05:40:41.467021 systemd-logind[1569]: Session 15 logged out. Waiting for processes to exit. Jan 23 05:40:41.474728 systemd[1]: Started sshd@14-10.0.0.10:22-10.0.0.1:47918.service - OpenSSH per-connection server daemon (10.0.0.1:47918). Jan 23 05:40:41.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.10:22-10.0.0.1:47918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:41.476905 systemd-logind[1569]: Removed session 15. 
Jan 23 05:40:41.555000 audit[5184]: USER_ACCT pid=5184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.557306 sshd[5184]: Accepted publickey for core from 10.0.0.1 port 47918 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:40:41.561000 audit[5184]: CRED_ACQ pid=5184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.562000 audit[5184]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe29175b60 a2=3 a3=0 items=0 ppid=1 pid=5184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:41.562000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:41.568141 sshd-session[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:40:41.595488 systemd-logind[1569]: New session 16 of user core. Jan 23 05:40:41.599826 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 05:40:41.605000 audit[5184]: USER_START pid=5184 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.608000 audit[5188]: CRED_ACQ pid=5188 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.866726 sshd[5188]: Connection closed by 10.0.0.1 port 47918 Jan 23 05:40:41.867900 sshd-session[5184]: pam_unix(sshd:session): session closed for user core Jan 23 05:40:41.875000 audit[5184]: USER_END pid=5184 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.875000 audit[5184]: CRED_DISP pid=5184 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:41.891296 systemd[1]: sshd@14-10.0.0.10:22-10.0.0.1:47918.service: Deactivated successfully. Jan 23 05:40:41.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.10:22-10.0.0.1:47918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:41.895902 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 05:40:41.898338 systemd-logind[1569]: Session 16 logged out. Waiting for processes to exit. 
Jan 23 05:40:41.904394 systemd[1]: Started sshd@15-10.0.0.10:22-10.0.0.1:47924.service - OpenSSH per-connection server daemon (10.0.0.1:47924). Jan 23 05:40:41.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.10:22-10.0.0.1:47924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:41.906562 systemd-logind[1569]: Removed session 16. Jan 23 05:40:41.998466 containerd[1597]: time="2026-01-23T05:40:41.998357342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 05:40:42.005000 audit[5199]: USER_ACCT pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:42.007959 sshd[5199]: Accepted publickey for core from 10.0.0.1 port 47924 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:40:42.008000 audit[5199]: CRED_ACQ pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:42.008000 audit[5199]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe902ecf50 a2=3 a3=0 items=0 ppid=1 pid=5199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:42.008000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:42.011976 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:40:42.022588 systemd-logind[1569]: New session 17 of user core. 
Jan 23 05:40:42.032467 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 05:40:42.039000 audit[5199]: USER_START pid=5199 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:42.042000 audit[5203]: CRED_ACQ pid=5203 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:42.074905 containerd[1597]: time="2026-01-23T05:40:42.074351800Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:42.094696 containerd[1597]: time="2026-01-23T05:40:42.094108818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:42.094696 containerd[1597]: time="2026-01-23T05:40:42.094152472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 05:40:42.098994 kubelet[2784]: E0123 05:40:42.098106 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:40:42.098994 kubelet[2784]: E0123 05:40:42.098161 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:40:42.098994 kubelet[2784]: E0123 05:40:42.098385 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwrcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68f956777f-b4hm5_calico-apiserver(03afe386-d286-45e1-b2d1-9d888b5a436b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:42.100762 kubelet[2784]: E0123 05:40:42.100696 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-b4hm5" podUID="03afe386-d286-45e1-b2d1-9d888b5a436b" Jan 23 05:40:42.241281 sshd[5203]: Connection closed by 10.0.0.1 port 47924 Jan 23 05:40:42.241845 sshd-session[5199]: pam_unix(sshd:session): session closed for user core Jan 23 05:40:42.246000 audit[5199]: USER_END pid=5199 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:42.246000 audit[5199]: CRED_DISP pid=5199 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:42.253256 systemd[1]: sshd@15-10.0.0.10:22-10.0.0.1:47924.service: Deactivated successfully. Jan 23 05:40:42.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.10:22-10.0.0.1:47924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:42.256860 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 05:40:42.258868 systemd-logind[1569]: Session 17 logged out. Waiting for processes to exit. Jan 23 05:40:42.262832 systemd-logind[1569]: Removed session 17. 
Jan 23 05:40:43.993923 kubelet[2784]: E0123 05:40:43.992758 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:40:43.995460 containerd[1597]: time="2026-01-23T05:40:43.995417889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 05:40:44.060490 containerd[1597]: time="2026-01-23T05:40:44.060307673Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:40:44.062151 containerd[1597]: time="2026-01-23T05:40:44.062030936Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 05:40:44.062220 containerd[1597]: time="2026-01-23T05:40:44.062171534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 05:40:44.062514 kubelet[2784]: E0123 05:40:44.062374 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 05:40:44.062514 kubelet[2784]: E0123 05:40:44.062456 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 05:40:44.062751 kubelet[2784]: E0123 05:40:44.062635 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgb5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-56x2c_calico-system(f2986fef-16f6-4f5c-ada1-3406bc086cb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 05:40:44.064255 kubelet[2784]: E0123 05:40:44.064012 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-56x2c" podUID="f2986fef-16f6-4f5c-ada1-3406bc086cb8" Jan 23 05:40:47.270454 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 23 05:40:47.270624 kernel: audit: type=1130 audit(1769146847.257:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.10:22-10.0.0.1:33736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:40:47.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.10:22-10.0.0.1:33736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:47.258216 systemd[1]: Started sshd@16-10.0.0.10:22-10.0.0.1:33736.service - OpenSSH per-connection server daemon (10.0.0.1:33736). Jan 23 05:40:47.339000 audit[5219]: USER_ACCT pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:47.342460 sshd[5219]: Accepted publickey for core from 10.0.0.1 port 33736 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:40:47.344981 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:40:47.353169 kernel: audit: type=1101 audit(1769146847.339:812): pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:47.353239 kernel: audit: type=1103 audit(1769146847.342:813): pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:47.342000 audit[5219]: CRED_ACQ pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:47.352650 systemd-logind[1569]: New session 18 of user core. 
Jan 23 05:40:47.374453 kernel: audit: type=1006 audit(1769146847.342:814): pid=5219 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 23 05:40:47.374580 kernel: audit: type=1300 audit(1769146847.342:814): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2eadbf70 a2=3 a3=0 items=0 ppid=1 pid=5219 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:47.342000 audit[5219]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2eadbf70 a2=3 a3=0 items=0 ppid=1 pid=5219 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:47.342000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:47.396341 kernel: audit: type=1327 audit(1769146847.342:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:47.396741 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 23 05:40:47.402000 audit[5219]: USER_START pid=5219 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:47.417146 kernel: audit: type=1105 audit(1769146847.402:815): pid=5219 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:47.404000 audit[5223]: CRED_ACQ pid=5223 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:47.431154 kernel: audit: type=1103 audit(1769146847.404:816): pid=5223 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:47.507603 sshd[5223]: Connection closed by 10.0.0.1 port 33736 Jan 23 05:40:47.509288 sshd-session[5219]: pam_unix(sshd:session): session closed for user core Jan 23 05:40:47.510000 audit[5219]: USER_END pid=5219 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:47.517472 systemd-logind[1569]: Session 18 logged out. Waiting for processes to exit. 
Jan 23 05:40:47.518790 systemd[1]: sshd@16-10.0.0.10:22-10.0.0.1:33736.service: Deactivated successfully. Jan 23 05:40:47.511000 audit[5219]: CRED_DISP pid=5219 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:47.526574 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 05:40:47.533916 kernel: audit: type=1106 audit(1769146847.510:817): pid=5219 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:47.534006 kernel: audit: type=1104 audit(1769146847.511:818): pid=5219 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:47.532968 systemd-logind[1569]: Removed session 18. Jan 23 05:40:47.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.10:22-10.0.0.1:33736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:40:47.994194 kubelet[2784]: E0123 05:40:47.994105 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:40:48.996825 kubelet[2784]: E0123 05:40:48.996563 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749c8bf99-5hnhn" podUID="43eecc48-4e9e-429e-8243-803259cf177c" Jan 23 05:40:49.995379 kubelet[2784]: E0123 05:40:49.995258 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" podUID="9393f034-f86c-4875-9435-8f85b0225d78" Jan 23 05:40:49.997339 kubelet[2784]: E0123 05:40:49.997255 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpddq" podUID="5e85a0d3-5d32-4d8a-b91c-0a641948fd22" Jan 23 05:40:51.992044 kubelet[2784]: E0123 05:40:51.991949 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:40:52.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.10:22-10.0.0.1:59736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:52.528156 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 05:40:52.528217 kernel: audit: type=1130 audit(1769146852.525:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.10:22-10.0.0.1:59736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:52.525964 systemd[1]: Started sshd@17-10.0.0.10:22-10.0.0.1:59736.service - OpenSSH per-connection server daemon (10.0.0.1:59736). 
Jan 23 05:40:52.615000 audit[5236]: USER_ACCT pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:52.629041 sshd[5236]: Accepted publickey for core from 10.0.0.1 port 59736 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:40:52.629544 kernel: audit: type=1101 audit(1769146852.615:821): pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:52.633334 sshd-session[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:40:52.630000 audit[5236]: CRED_ACQ pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:52.653017 kernel: audit: type=1103 audit(1769146852.630:822): pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:52.653596 kernel: audit: type=1006 audit(1769146852.630:823): pid=5236 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 23 05:40:52.630000 audit[5236]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc001466b0 a2=3 a3=0 items=0 ppid=1 pid=5236 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:52.667480 kernel: audit: type=1300 audit(1769146852.630:823): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc001466b0 a2=3 a3=0 items=0 ppid=1 pid=5236 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:40:52.667536 kernel: audit: type=1327 audit(1769146852.630:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:52.630000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:40:52.670249 systemd-logind[1569]: New session 19 of user core. Jan 23 05:40:52.682429 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 05:40:52.692000 audit[5236]: USER_START pid=5236 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:52.703143 kernel: audit: type=1105 audit(1769146852.692:824): pid=5236 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:52.702000 audit[5240]: CRED_ACQ pid=5240 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:52.715114 kernel: audit: type=1103 audit(1769146852.702:825): pid=5240 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:52.814469 sshd[5240]: Connection closed by 10.0.0.1 port 59736 Jan 23 05:40:52.815251 sshd-session[5236]: pam_unix(sshd:session): session closed for user core Jan 23 05:40:52.815000 audit[5236]: USER_END pid=5236 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:52.820199 systemd[1]: sshd@17-10.0.0.10:22-10.0.0.1:59736.service: Deactivated successfully. Jan 23 05:40:52.822851 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 05:40:52.824944 systemd-logind[1569]: Session 19 logged out. Waiting for processes to exit. Jan 23 05:40:52.816000 audit[5236]: CRED_DISP pid=5236 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:52.826858 systemd-logind[1569]: Removed session 19. 
Jan 23 05:40:52.833736 kernel: audit: type=1106 audit(1769146852.815:826): pid=5236 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:52.833789 kernel: audit: type=1104 audit(1769146852.816:827): pid=5236 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:52.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.10:22-10.0.0.1:59736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:52.993042 kubelet[2784]: E0123 05:40:52.992710 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6986698974-zhprj" podUID="3cdbf9fd-0ae3-408e-ba62-8b7474385dec" Jan 23 05:40:54.993499 kubelet[2784]: E0123 05:40:54.993398 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-56x2c" podUID="f2986fef-16f6-4f5c-ada1-3406bc086cb8" Jan 23 05:40:55.996717 kubelet[2784]: E0123 05:40:55.995939 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" podUID="3912831c-901b-4041-9115-637bb8679bc2" Jan 23 05:40:55.996717 kubelet[2784]: E0123 05:40:55.996341 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-b4hm5" podUID="03afe386-d286-45e1-b2d1-9d888b5a436b" Jan 23 05:40:57.832948 systemd[1]: Started sshd@18-10.0.0.10:22-10.0.0.1:59740.service - OpenSSH per-connection server daemon (10.0.0.1:59740). Jan 23 05:40:57.837111 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 05:40:57.837193 kernel: audit: type=1130 audit(1769146857.832:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.10:22-10.0.0.1:59740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:40:57.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.10:22-10.0.0.1:59740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:40:57.935000 audit[5284]: USER_ACCT pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:57.936619 sshd[5284]: Accepted publickey for core from 10.0.0.1 port 59740 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:40:57.939970 sshd-session[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:40:57.935000 audit[5284]: CRED_ACQ pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:40:57.948450 systemd-logind[1569]: New session 20 of user core. 
Jan 23 05:40:57.954198 kernel: audit: type=1101 audit(1769146857.935:830): pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:40:57.954272 kernel: audit: type=1103 audit(1769146857.935:831): pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:40:57.959366 kernel: audit: type=1006 audit(1769146857.935:832): pid=5284 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1
Jan 23 05:40:57.935000 audit[5284]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc38f97f40 a2=3 a3=0 items=0 ppid=1 pid=5284 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:40:57.967621 kernel: audit: type=1300 audit(1769146857.935:832): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc38f97f40 a2=3 a3=0 items=0 ppid=1 pid=5284 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:40:57.967717 kernel: audit: type=1327 audit(1769146857.935:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 05:40:57.935000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 05:40:57.974634 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 05:40:57.978000 audit[5284]: USER_START pid=5284 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:40:57.980000 audit[5288]: CRED_ACQ pid=5288 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:40:57.995939 kernel: audit: type=1105 audit(1769146857.978:833): pid=5284 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:40:57.996752 kernel: audit: type=1103 audit(1769146857.980:834): pid=5288 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:40:58.108172 sshd[5288]: Connection closed by 10.0.0.1 port 59740
Jan 23 05:40:58.109723 sshd-session[5284]: pam_unix(sshd:session): session closed for user core
Jan 23 05:40:58.111000 audit[5284]: USER_END pid=5284 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:40:58.111000 audit[5284]: CRED_DISP pid=5284 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:40:58.127658 kernel: audit: type=1106 audit(1769146858.111:835): pid=5284 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:40:58.127753 kernel: audit: type=1104 audit(1769146858.111:836): pid=5284 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:40:58.130625 systemd[1]: sshd@18-10.0.0.10:22-10.0.0.1:59740.service: Deactivated successfully.
Jan 23 05:40:58.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.10:22-10.0.0.1:59740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:40:58.132886 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 05:40:58.133853 systemd-logind[1569]: Session 20 logged out. Waiting for processes to exit.
Jan 23 05:40:58.135660 systemd-logind[1569]: Removed session 20.
Jan 23 05:40:59.997382 kubelet[2784]: E0123 05:40:59.997334 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749c8bf99-5hnhn" podUID="43eecc48-4e9e-429e-8243-803259cf177c"
Jan 23 05:41:01.992433 kubelet[2784]: E0123 05:41:01.992284 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 05:41:01.996386 kubelet[2784]: E0123 05:41:01.996335 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpddq" podUID="5e85a0d3-5d32-4d8a-b91c-0a641948fd22"
Jan 23 05:41:03.126509 systemd[1]: Started sshd@19-10.0.0.10:22-10.0.0.1:47504.service - OpenSSH per-connection server daemon (10.0.0.1:47504).
Jan 23 05:41:03.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.10:22-10.0.0.1:47504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:41:03.129123 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 23 05:41:03.129185 kernel: audit: type=1130 audit(1769146863.125:838): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.10:22-10.0.0.1:47504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:41:03.226000 audit[5301]: USER_ACCT pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:03.228299 sshd[5301]: Accepted publickey for core from 10.0.0.1 port 47504 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE
Jan 23 05:41:03.237635 kernel: audit: type=1101 audit(1769146863.226:839): pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:03.243247 sshd-session[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 05:41:03.237000 audit[5301]: CRED_ACQ pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:03.259431 kernel: audit: type=1103 audit(1769146863.237:840): pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:03.263315 systemd-logind[1569]: New session 21 of user core.
Jan 23 05:41:03.237000 audit[5301]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5439fa40 a2=3 a3=0 items=0 ppid=1 pid=5301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:41:03.280767 kernel: audit: type=1006 audit(1769146863.237:841): pid=5301 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
Jan 23 05:41:03.280903 kernel: audit: type=1300 audit(1769146863.237:841): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5439fa40 a2=3 a3=0 items=0 ppid=1 pid=5301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:41:03.280954 kernel: audit: type=1327 audit(1769146863.237:841): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 05:41:03.237000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 05:41:03.288458 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 05:41:03.292000 audit[5301]: USER_START pid=5301 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:03.294000 audit[5305]: CRED_ACQ pid=5305 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:03.315087 kernel: audit: type=1105 audit(1769146863.292:842): pid=5301 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:03.315180 kernel: audit: type=1103 audit(1769146863.294:843): pid=5305 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:03.421792 sshd[5305]: Connection closed by 10.0.0.1 port 47504
Jan 23 05:41:03.422311 sshd-session[5301]: pam_unix(sshd:session): session closed for user core
Jan 23 05:41:03.425000 audit[5301]: USER_END pid=5301 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:03.427000 audit[5301]: CRED_DISP pid=5301 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:03.454112 kernel: audit: type=1106 audit(1769146863.425:844): pid=5301 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:03.454312 kernel: audit: type=1104 audit(1769146863.427:845): pid=5301 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:03.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.10:22-10.0.0.1:47504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:41:03.451802 systemd[1]: sshd@19-10.0.0.10:22-10.0.0.1:47504.service: Deactivated successfully.
Jan 23 05:41:03.456440 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 05:41:03.457725 systemd-logind[1569]: Session 21 logged out. Waiting for processes to exit.
Jan 23 05:41:03.461103 systemd-logind[1569]: Removed session 21.
Jan 23 05:41:03.997773 kubelet[2784]: E0123 05:41:03.997591 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" podUID="9393f034-f86c-4875-9435-8f85b0225d78"
Jan 23 05:41:06.995109 kubelet[2784]: E0123 05:41:06.994962 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-56x2c" podUID="f2986fef-16f6-4f5c-ada1-3406bc086cb8"
Jan 23 05:41:07.992764 kubelet[2784]: E0123 05:41:07.992412 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 05:41:07.994103 kubelet[2784]: E0123 05:41:07.994014 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6986698974-zhprj" podUID="3cdbf9fd-0ae3-408e-ba62-8b7474385dec"
Jan 23 05:41:08.439316 systemd[1]: Started sshd@20-10.0.0.10:22-10.0.0.1:47516.service - OpenSSH per-connection server daemon (10.0.0.1:47516).
Jan 23 05:41:08.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.10:22-10.0.0.1:47516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:41:08.442178 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 23 05:41:08.442499 kernel: audit: type=1130 audit(1769146868.438:847): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.10:22-10.0.0.1:47516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:41:08.511000 audit[5318]: USER_ACCT pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.512375 sshd[5318]: Accepted publickey for core from 10.0.0.1 port 47516 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE
Jan 23 05:41:08.514712 sshd-session[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 05:41:08.512000 audit[5318]: CRED_ACQ pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.528420 kernel: audit: type=1101 audit(1769146868.511:848): pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.528503 kernel: audit: type=1103 audit(1769146868.512:849): pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.533293 kernel: audit: type=1006 audit(1769146868.512:850): pid=5318 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Jan 23 05:41:08.512000 audit[5318]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea9669900 a2=3 a3=0 items=0 ppid=1 pid=5318 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:41:08.543096 kernel: audit: type=1300 audit(1769146868.512:850): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea9669900 a2=3 a3=0 items=0 ppid=1 pid=5318 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:41:08.543156 kernel: audit: type=1327 audit(1769146868.512:850): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 05:41:08.512000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 05:41:08.545433 systemd-logind[1569]: New session 22 of user core.
Jan 23 05:41:08.556386 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 05:41:08.562000 audit[5318]: USER_START pid=5318 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.565000 audit[5322]: CRED_ACQ pid=5322 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.584100 kernel: audit: type=1105 audit(1769146868.562:851): pid=5318 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.584200 kernel: audit: type=1103 audit(1769146868.565:852): pid=5322 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.705114 sshd[5322]: Connection closed by 10.0.0.1 port 47516
Jan 23 05:41:08.709465 sshd-session[5318]: pam_unix(sshd:session): session closed for user core
Jan 23 05:41:08.730892 kernel: audit: type=1106 audit(1769146868.711:853): pid=5318 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.711000 audit[5318]: USER_END pid=5318 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.737123 systemd-logind[1569]: Session 22 logged out. Waiting for processes to exit.
Jan 23 05:41:08.739248 systemd[1]: Started sshd@21-10.0.0.10:22-10.0.0.1:47530.service - OpenSSH per-connection server daemon (10.0.0.1:47530).
Jan 23 05:41:08.741550 systemd[1]: sshd@20-10.0.0.10:22-10.0.0.1:47516.service: Deactivated successfully.
Jan 23 05:41:08.743891 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 05:41:08.726000 audit[5318]: CRED_DISP pid=5318 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.749653 systemd-logind[1569]: Removed session 22.
Jan 23 05:41:08.758109 kernel: audit: type=1104 audit(1769146868.726:854): pid=5318 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.10:22-10.0.0.1:47530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:41:08.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.10:22-10.0.0.1:47516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:41:08.843000 audit[5332]: USER_ACCT pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.844551 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 47530 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE
Jan 23 05:41:08.844000 audit[5332]: CRED_ACQ pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.845000 audit[5332]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff39612710 a2=3 a3=0 items=0 ppid=1 pid=5332 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:41:08.845000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 05:41:08.847444 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 05:41:08.855345 systemd-logind[1569]: New session 23 of user core.
Jan 23 05:41:08.863978 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 05:41:08.869000 audit[5332]: USER_START pid=5332 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.873000 audit[5339]: CRED_ACQ pid=5339 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:08.993379 kubelet[2784]: E0123 05:41:08.993203 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-b4hm5" podUID="03afe386-d286-45e1-b2d1-9d888b5a436b"
Jan 23 05:41:09.238339 sshd[5339]: Connection closed by 10.0.0.1 port 47530
Jan 23 05:41:09.237408 sshd-session[5332]: pam_unix(sshd:session): session closed for user core
Jan 23 05:41:09.239000 audit[5332]: USER_END pid=5332 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:09.241000 audit[5332]: CRED_DISP pid=5332 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:09.252520 systemd[1]: sshd@21-10.0.0.10:22-10.0.0.1:47530.service: Deactivated successfully.
Jan 23 05:41:09.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.10:22-10.0.0.1:47530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:41:09.255632 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 05:41:09.259867 systemd-logind[1569]: Session 23 logged out. Waiting for processes to exit.
Jan 23 05:41:09.265480 systemd-logind[1569]: Removed session 23.
Jan 23 05:41:09.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.10:22-10.0.0.1:47536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:41:09.270357 systemd[1]: Started sshd@22-10.0.0.10:22-10.0.0.1:47536.service - OpenSSH per-connection server daemon (10.0.0.1:47536).
Jan 23 05:41:09.346000 audit[5351]: USER_ACCT pid=5351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:09.348228 sshd[5351]: Accepted publickey for core from 10.0.0.1 port 47536 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE
Jan 23 05:41:09.348000 audit[5351]: CRED_ACQ pid=5351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:09.348000 audit[5351]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe1775fd0 a2=3 a3=0 items=0 ppid=1 pid=5351 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:41:09.348000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 05:41:09.351604 sshd-session[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 05:41:09.359459 systemd-logind[1569]: New session 24 of user core.
Jan 23 05:41:09.372315 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 23 05:41:09.375000 audit[5351]: USER_START pid=5351 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:09.378000 audit[5355]: CRED_ACQ pid=5355 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:09.994600 kubelet[2784]: E0123 05:41:09.994501 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" podUID="3912831c-901b-4041-9115-637bb8679bc2"
Jan 23 05:41:10.091000 audit[5371]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=5371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 23 05:41:10.091000 audit[5371]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc72261900 a2=0 a3=7ffc722618ec items=0 ppid=2961 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:41:10.091000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 23 05:41:10.094947 sshd[5355]: Connection closed by 10.0.0.1 port 47536
Jan 23 05:41:10.095891 sshd-session[5351]: pam_unix(sshd:session): session closed for user core
Jan 23 05:41:10.097000 audit[5371]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 23 05:41:10.097000 audit[5371]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc72261900 a2=0 a3=0 items=0 ppid=2961 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 05:41:10.098000 audit[5351]: USER_END pid=5351 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:10.097000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 23 05:41:10.098000 audit[5351]: CRED_DISP pid=5351 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:10.112658 systemd[1]: sshd@22-10.0.0.10:22-10.0.0.1:47536.service: Deactivated successfully.
Jan 23 05:41:10.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.10:22-10.0.0.1:47536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:41:10.116876 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 05:41:10.127350 systemd-logind[1569]: Session 24 logged out. Waiting for processes to exit.
Jan 23 05:41:10.130583 systemd-logind[1569]: Removed session 24. Jan 23 05:41:10.137462 systemd[1]: Started sshd@23-10.0.0.10:22-10.0.0.1:47540.service - OpenSSH per-connection server daemon (10.0.0.1:47540). Jan 23 05:41:10.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.10:22-10.0.0.1:47540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:41:10.149000 audit[5378]: NETFILTER_CFG table=filter:147 family=2 entries=38 op=nft_register_rule pid=5378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:41:10.149000 audit[5378]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc7da121e0 a2=0 a3=7ffc7da121cc items=0 ppid=2961 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:41:10.149000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:41:10.156000 audit[5378]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=5378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:41:10.156000 audit[5378]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc7da121e0 a2=0 a3=0 items=0 ppid=2961 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:41:10.156000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:41:10.223000 audit[5377]: USER_ACCT pid=5377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:10.225362 sshd[5377]: Accepted publickey for core from 10.0.0.1 port 47540 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:41:10.225000 audit[5377]: CRED_ACQ pid=5377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:10.225000 audit[5377]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd374ef890 a2=3 a3=0 items=0 ppid=1 pid=5377 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:41:10.225000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:41:10.228888 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:41:10.236551 systemd-logind[1569]: New session 25 of user core. Jan 23 05:41:10.248425 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 05:41:10.252000 audit[5377]: USER_START pid=5377 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:10.254000 audit[5382]: CRED_ACQ pid=5382 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:10.483742 sshd[5382]: Connection closed by 10.0.0.1 port 47540 Jan 23 05:41:10.484245 sshd-session[5377]: pam_unix(sshd:session): session closed for user core Jan 23 05:41:10.484000 audit[5377]: USER_END pid=5377 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:10.485000 audit[5377]: CRED_DISP pid=5377 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:10.498451 systemd[1]: sshd@23-10.0.0.10:22-10.0.0.1:47540.service: Deactivated successfully. Jan 23 05:41:10.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.10:22-10.0.0.1:47540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:41:10.501528 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 05:41:10.504249 systemd-logind[1569]: Session 25 logged out. Waiting for processes to exit. 
Jan 23 05:41:10.506028 systemd[1]: Started sshd@24-10.0.0.10:22-10.0.0.1:47544.service - OpenSSH per-connection server daemon (10.0.0.1:47544). Jan 23 05:41:10.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.10:22-10.0.0.1:47544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:41:10.507952 systemd-logind[1569]: Removed session 25. Jan 23 05:41:10.571000 audit[5393]: USER_ACCT pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:10.573199 sshd[5393]: Accepted publickey for core from 10.0.0.1 port 47544 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:41:10.573000 audit[5393]: CRED_ACQ pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:10.573000 audit[5393]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5d1fad50 a2=3 a3=0 items=0 ppid=1 pid=5393 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:41:10.573000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:41:10.577850 sshd-session[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:41:10.585936 systemd-logind[1569]: New session 26 of user core. Jan 23 05:41:10.592235 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 23 05:41:10.596000 audit[5393]: USER_START pid=5393 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:10.599000 audit[5397]: CRED_ACQ pid=5397 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:10.727800 sshd[5397]: Connection closed by 10.0.0.1 port 47544 Jan 23 05:41:10.729168 sshd-session[5393]: pam_unix(sshd:session): session closed for user core Jan 23 05:41:10.731000 audit[5393]: USER_END pid=5393 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:10.731000 audit[5393]: CRED_DISP pid=5393 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:10.736519 systemd[1]: sshd@24-10.0.0.10:22-10.0.0.1:47544.service: Deactivated successfully. Jan 23 05:41:10.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.10:22-10.0.0.1:47544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:41:10.739875 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 05:41:10.744863 systemd-logind[1569]: Session 26 logged out. Waiting for processes to exit. 
Jan 23 05:41:10.746871 systemd-logind[1569]: Removed session 26. Jan 23 05:41:13.991954 kubelet[2784]: E0123 05:41:13.991888 2784 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 05:41:13.995396 kubelet[2784]: E0123 05:41:13.995340 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749c8bf99-5hnhn" podUID="43eecc48-4e9e-429e-8243-803259cf177c" Jan 23 05:41:15.743001 systemd[1]: Started sshd@25-10.0.0.10:22-10.0.0.1:41286.service - OpenSSH per-connection server daemon (10.0.0.1:41286). Jan 23 05:41:15.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.10:22-10.0.0.1:41286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:41:15.745851 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 23 05:41:15.745939 kernel: audit: type=1130 audit(1769146875.742:896): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.10:22-10.0.0.1:41286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:41:15.813000 audit[5416]: USER_ACCT pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:15.814477 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 41286 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:41:15.819566 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:41:15.815000 audit[5416]: CRED_ACQ pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:15.827342 systemd-logind[1569]: New session 27 of user core. Jan 23 05:41:15.830350 kernel: audit: type=1101 audit(1769146875.813:897): pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:15.830417 kernel: audit: type=1103 audit(1769146875.815:898): pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:15.830471 kernel: audit: type=1006 audit(1769146875.815:899): pid=5416 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 23 05:41:15.835498 kernel: audit: type=1300 audit(1769146875.815:899): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff3809280 a2=3 a3=0 items=0 ppid=1 pid=5416 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:41:15.815000 audit[5416]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff3809280 a2=3 a3=0 items=0 ppid=1 pid=5416 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:41:15.844960 kernel: audit: type=1327 audit(1769146875.815:899): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:41:15.815000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:41:15.857474 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 05:41:15.861000 audit[5416]: USER_START pid=5416 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:15.862000 audit[5420]: CRED_ACQ pid=5420 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:15.889670 kernel: audit: type=1105 audit(1769146875.861:900): pid=5416 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:15.889807 kernel: audit: type=1103 audit(1769146875.862:901): pid=5420 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:15.973938 sshd[5420]: Connection closed by 10.0.0.1 port 41286 Jan 23 05:41:15.974432 sshd-session[5416]: pam_unix(sshd:session): session closed for user core Jan 23 05:41:15.975000 audit[5416]: USER_END pid=5416 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:15.981316 systemd[1]: sshd@25-10.0.0.10:22-10.0.0.1:41286.service: Deactivated successfully. Jan 23 05:41:15.984435 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 05:41:15.985861 systemd-logind[1569]: Session 27 logged out. Waiting for processes to exit. Jan 23 05:41:15.987555 systemd-logind[1569]: Removed session 27. 
Jan 23 05:41:15.975000 audit[5416]: CRED_DISP pid=5416 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:15.998535 kernel: audit: type=1106 audit(1769146875.975:902): pid=5416 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:15.998651 kernel: audit: type=1104 audit(1769146875.975:903): pid=5416 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:15.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.10:22-10.0.0.1:41286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:41:16.924000 audit[5435]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=5435 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:41:16.924000 audit[5435]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffff2d5da50 a2=0 a3=7ffff2d5da3c items=0 ppid=2961 pid=5435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:41:16.924000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:41:16.937000 audit[5435]: NETFILTER_CFG table=nat:150 family=2 entries=104 op=nft_register_chain pid=5435 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 05:41:16.937000 audit[5435]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffff2d5da50 a2=0 a3=7ffff2d5da3c items=0 ppid=2961 pid=5435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:41:16.937000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 05:41:16.994776 kubelet[2784]: E0123 05:41:16.994626 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpddq" podUID="5e85a0d3-5d32-4d8a-b91c-0a641948fd22" Jan 23 05:41:18.993850 containerd[1597]: time="2026-01-23T05:41:18.993770319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 05:41:19.063788 containerd[1597]: time="2026-01-23T05:41:19.063738126Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:41:19.065254 containerd[1597]: time="2026-01-23T05:41:19.065188923Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 05:41:19.065312 containerd[1597]: time="2026-01-23T05:41:19.065264857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 05:41:19.065557 kubelet[2784]: E0123 05:41:19.065433 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:41:19.065557 kubelet[2784]: E0123 05:41:19.065508 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 05:41:19.066026 kubelet[2784]: E0123 
05:41:19.065614 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhcw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68f956777f-dhm59_calico-apiserver(9393f034-f86c-4875-9435-8f85b0225d78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 05:41:19.067305 kubelet[2784]: E0123 05:41:19.067180 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" podUID="9393f034-f86c-4875-9435-8f85b0225d78" Jan 23 05:41:20.994163 systemd[1]: Started sshd@26-10.0.0.10:22-10.0.0.1:41302.service - OpenSSH per-connection server daemon (10.0.0.1:41302). 
Jan 23 05:41:20.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.10:22-10.0.0.1:41302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:41:20.995836 kubelet[2784]: E0123 05:41:20.995720 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" podUID="3912831c-901b-4041-9115-637bb8679bc2" Jan 23 05:41:20.997597 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 23 05:41:20.998020 kernel: audit: type=1130 audit(1769146880.993:907): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.10:22-10.0.0.1:41302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:41:21.061000 audit[5439]: USER_ACCT pid=5439 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:21.062594 sshd[5439]: Accepted publickey for core from 10.0.0.1 port 41302 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:41:21.065100 sshd-session[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:41:21.088601 kernel: audit: type=1101 audit(1769146881.061:908): pid=5439 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:21.088765 kernel: audit: type=1103 audit(1769146881.062:909): pid=5439 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:21.062000 audit[5439]: CRED_ACQ pid=5439 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:21.076013 systemd-logind[1569]: New session 28 of user core. 
Jan 23 05:41:21.062000 audit[5439]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf30d17a0 a2=3 a3=0 items=0 ppid=1 pid=5439 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:41:21.105414 kernel: audit: type=1006 audit(1769146881.062:910): pid=5439 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 23 05:41:21.105484 kernel: audit: type=1300 audit(1769146881.062:910): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf30d17a0 a2=3 a3=0 items=0 ppid=1 pid=5439 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:41:21.062000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:41:21.106565 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 23 05:41:21.109726 kernel: audit: type=1327 audit(1769146881.062:910): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:41:21.111000 audit[5439]: USER_START pid=5439 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:21.121000 audit[5443]: CRED_ACQ pid=5443 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:21.131734 kernel: audit: type=1105 audit(1769146881.111:911): pid=5439 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:21.131888 kernel: audit: type=1103 audit(1769146881.121:912): pid=5443 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:21.232728 sshd[5443]: Connection closed by 10.0.0.1 port 41302 Jan 23 05:41:21.235327 sshd-session[5439]: pam_unix(sshd:session): session closed for user core Jan 23 05:41:21.236000 audit[5439]: USER_END pid=5439 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 23 05:41:21.244526 systemd[1]: sshd@26-10.0.0.10:22-10.0.0.1:41302.service: Deactivated successfully. Jan 23 05:41:21.247486 systemd-logind[1569]: Session 28 logged out. Waiting for processes to exit. Jan 23 05:41:21.252475 kernel: audit: type=1106 audit(1769146881.236:913): pid=5439 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:21.252142 systemd[1]: session-28.scope: Deactivated successfully. Jan 23 05:41:21.236000 audit[5439]: CRED_DISP pid=5439 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:21.259488 systemd-logind[1569]: Removed session 28. Jan 23 05:41:21.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.10:22-10.0.0.1:41302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:41:21.266139 kernel: audit: type=1104 audit(1769146881.236:914): pid=5439 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:21.993295 kubelet[2784]: E0123 05:41:21.993240 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-b4hm5" podUID="03afe386-d286-45e1-b2d1-9d888b5a436b" Jan 23 05:41:21.993295 kubelet[2784]: E0123 05:41:21.993239 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-56x2c" podUID="f2986fef-16f6-4f5c-ada1-3406bc086cb8" Jan 23 05:41:22.994125 containerd[1597]: time="2026-01-23T05:41:22.993816860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 05:41:23.054024 containerd[1597]: time="2026-01-23T05:41:23.053930917Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:41:23.055915 containerd[1597]: time="2026-01-23T05:41:23.055810375Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 05:41:23.055915 containerd[1597]: time="2026-01-23T05:41:23.055849187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 05:41:23.056364 kubelet[2784]: E0123 05:41:23.056312 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 05:41:23.056899 kubelet[2784]: E0123 05:41:23.056367 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 05:41:23.056899 kubelet[2784]: E0123 05:41:23.056489 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-686jd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6986698974-zhprj_calico-system(3cdbf9fd-0ae3-408e-ba62-8b7474385dec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 05:41:23.058010 kubelet[2784]: E0123 05:41:23.057932 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6986698974-zhprj" podUID="3cdbf9fd-0ae3-408e-ba62-8b7474385dec" Jan 23 05:41:24.993109 containerd[1597]: time="2026-01-23T05:41:24.992964475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 05:41:25.077766 containerd[1597]: time="2026-01-23T05:41:25.077637817Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
05:41:25.079380 containerd[1597]: time="2026-01-23T05:41:25.079216642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 05:41:25.079380 containerd[1597]: time="2026-01-23T05:41:25.079261438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 05:41:25.079681 kubelet[2784]: E0123 05:41:25.079582 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 05:41:25.080220 kubelet[2784]: E0123 05:41:25.079728 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 05:41:25.080220 kubelet[2784]: E0123 05:41:25.079870 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2155c05c8ee142bb8990bf0ae2991b80,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2srbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-749c8bf99-5hnhn_calico-system(43eecc48-4e9e-429e-8243-803259cf177c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 05:41:25.083382 containerd[1597]: time="2026-01-23T05:41:25.083294360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 05:41:25.136642 containerd[1597]: 
time="2026-01-23T05:41:25.136544946Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:41:25.137970 containerd[1597]: time="2026-01-23T05:41:25.137922039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 05:41:25.138096 containerd[1597]: time="2026-01-23T05:41:25.138018130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 05:41:25.138331 kubelet[2784]: E0123 05:41:25.138252 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 05:41:25.138445 kubelet[2784]: E0123 05:41:25.138400 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 05:41:25.138760 kubelet[2784]: E0123 05:41:25.138675 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2srbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-749c8bf99-5hnhn_calico-system(43eecc48-4e9e-429e-8243-803259cf177c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 05:41:25.140189 kubelet[2784]: E0123 05:41:25.140102 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749c8bf99-5hnhn" podUID="43eecc48-4e9e-429e-8243-803259cf177c" Jan 23 05:41:26.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.10:22-10.0.0.1:33642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:41:26.249636 systemd[1]: Started sshd@27-10.0.0.10:22-10.0.0.1:33642.service - OpenSSH per-connection server daemon (10.0.0.1:33642). Jan 23 05:41:26.251346 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 05:41:26.251413 kernel: audit: type=1130 audit(1769146886.248:916): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.10:22-10.0.0.1:33642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:41:26.313000 audit[5500]: USER_ACCT pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:26.314393 sshd[5500]: Accepted publickey for core from 10.0.0.1 port 33642 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:41:26.316853 sshd-session[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:41:26.314000 audit[5500]: CRED_ACQ pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:26.323196 systemd-logind[1569]: New session 29 of user core. Jan 23 05:41:26.330169 kernel: audit: type=1101 audit(1769146886.313:917): pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:26.330356 kernel: audit: type=1103 audit(1769146886.314:918): pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:26.330403 kernel: audit: type=1006 audit(1769146886.314:919): pid=5500 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 23 05:41:26.314000 audit[5500]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd00411370 a2=3 a3=0 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:41:26.344146 kernel: audit: type=1300 audit(1769146886.314:919): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd00411370 a2=3 a3=0 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:41:26.344265 kernel: audit: type=1327 audit(1769146886.314:919): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:41:26.314000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:41:26.349489 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 23 05:41:26.353000 audit[5500]: USER_START pid=5500 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:26.353000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:26.371505 kernel: audit: type=1105 audit(1769146886.353:920): pid=5500 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:26.371572 kernel: audit: type=1103 audit(1769146886.353:921): pid=5504 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:26.467736 sshd[5504]: Connection closed by 10.0.0.1 port 33642 Jan 23 05:41:26.468144 sshd-session[5500]: pam_unix(sshd:session): session closed for user core Jan 23 05:41:26.470000 audit[5500]: USER_END pid=5500 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:26.475012 systemd[1]: sshd@27-10.0.0.10:22-10.0.0.1:33642.service: Deactivated successfully. Jan 23 05:41:26.478028 systemd[1]: session-29.scope: Deactivated successfully. Jan 23 05:41:26.480258 systemd-logind[1569]: Session 29 logged out. Waiting for processes to exit. Jan 23 05:41:26.483371 systemd-logind[1569]: Removed session 29. Jan 23 05:41:26.486147 kernel: audit: type=1106 audit(1769146886.470:922): pid=5500 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:26.470000 audit[5500]: CRED_DISP pid=5500 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:26.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.10:22-10.0.0.1:33642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 05:41:26.498124 kernel: audit: type=1104 audit(1769146886.470:923): pid=5500 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:28.992559 containerd[1597]: time="2026-01-23T05:41:28.992499103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 05:41:29.076582 containerd[1597]: time="2026-01-23T05:41:29.076473105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:41:29.078383 containerd[1597]: time="2026-01-23T05:41:29.078277682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 05:41:29.078445 containerd[1597]: time="2026-01-23T05:41:29.078388399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 05:41:29.078739 kubelet[2784]: E0123 05:41:29.078640 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 05:41:29.079287 kubelet[2784]: E0123 05:41:29.078743 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 05:41:29.079287 kubelet[2784]: E0123 05:41:29.078926 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gpdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rpddq_calico-system(5e85a0d3-5d32-4d8a-b91c-0a641948fd22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 05:41:29.081786 containerd[1597]: time="2026-01-23T05:41:29.081746947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 05:41:29.150105 containerd[1597]: time="2026-01-23T05:41:29.149738439Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 05:41:29.151193 containerd[1597]: time="2026-01-23T05:41:29.151139555Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 05:41:29.151303 containerd[1597]: time="2026-01-23T05:41:29.151258336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 05:41:29.152567 kubelet[2784]: E0123 05:41:29.151475 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 05:41:29.152858 kubelet[2784]: E0123 05:41:29.152837 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 05:41:29.153304 kubelet[2784]: E0123 05:41:29.153115 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gpdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rpddq_calico-system(5e85a0d3-5d32-4d8a-b91c-0a641948fd22): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 05:41:29.154590 kubelet[2784]: E0123 05:41:29.154563 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpddq" podUID="5e85a0d3-5d32-4d8a-b91c-0a641948fd22" Jan 23 05:41:30.993156 kubelet[2784]: E0123 05:41:30.992802 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68f956777f-dhm59" podUID="9393f034-f86c-4875-9435-8f85b0225d78" Jan 23 05:41:31.486572 systemd[1]: Started sshd@28-10.0.0.10:22-10.0.0.1:33646.service - OpenSSH per-connection server daemon (10.0.0.1:33646). 
Jan 23 05:41:31.497822 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 05:41:31.497915 kernel: audit: type=1130 audit(1769146891.485:925): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.10:22-10.0.0.1:33646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:41:31.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.10:22-10.0.0.1:33646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 05:41:31.561000 audit[5524]: USER_ACCT pid=5524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:31.562958 sshd[5524]: Accepted publickey for core from 10.0.0.1 port 33646 ssh2: RSA SHA256:bKxydkrXMpYqYZHZDTeIK9iSi8wUE6gcftH68QNmGRE Jan 23 05:41:31.564988 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 05:41:31.561000 audit[5524]: CRED_ACQ pid=5524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:31.571570 systemd-logind[1569]: New session 30 of user core. 
Jan 23 05:41:31.582203 kernel: audit: type=1101 audit(1769146891.561:926): pid=5524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:31.582335 kernel: audit: type=1103 audit(1769146891.561:927): pid=5524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 05:41:31.582365 kernel: audit: type=1006 audit(1769146891.561:928): pid=5524 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 23 05:41:31.561000 audit[5524]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc31c7f740 a2=3 a3=0 items=0 ppid=1 pid=5524 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:41:31.599908 kernel: audit: type=1300 audit(1769146891.561:928): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc31c7f740 a2=3 a3=0 items=0 ppid=1 pid=5524 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 05:41:31.599977 kernel: audit: type=1327 audit(1769146891.561:928): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:41:31.561000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 05:41:31.606520 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 23 05:41:31.609000 audit[5524]: USER_START pid=5524 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:31.609000 audit[5528]: CRED_ACQ pid=5528 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:31.627994 kernel: audit: type=1105 audit(1769146891.609:929): pid=5524 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:31.628132 kernel: audit: type=1103 audit(1769146891.609:930): pid=5528 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:31.694625 sshd[5528]: Connection closed by 10.0.0.1 port 33646
Jan 23 05:41:31.695036 sshd-session[5524]: pam_unix(sshd:session): session closed for user core
Jan 23 05:41:31.696000 audit[5524]: USER_END pid=5524 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:31.700913 systemd[1]: sshd@28-10.0.0.10:22-10.0.0.1:33646.service: Deactivated successfully.
Jan 23 05:41:31.704882 systemd[1]: session-30.scope: Deactivated successfully.
Jan 23 05:41:31.707247 systemd-logind[1569]: Session 30 logged out. Waiting for processes to exit.
Jan 23 05:41:31.708127 kernel: audit: type=1106 audit(1769146891.696:931): pid=5524 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:31.696000 audit[5524]: CRED_DISP pid=5524 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:31.709258 systemd-logind[1569]: Removed session 30.
Jan 23 05:41:31.720118 kernel: audit: type=1104 audit(1769146891.696:932): pid=5524 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 05:41:31.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.10:22-10.0.0.1:33646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 05:41:33.993301 containerd[1597]: time="2026-01-23T05:41:33.993119167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 05:41:34.052291 containerd[1597]: time="2026-01-23T05:41:34.052197474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 05:41:34.054031 containerd[1597]: time="2026-01-23T05:41:34.053875734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 05:41:34.054031 containerd[1597]: time="2026-01-23T05:41:34.053976362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 23 05:41:34.054213 kubelet[2784]: E0123 05:41:34.054178 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 05:41:34.055221 kubelet[2784]: E0123 05:41:34.054228 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 05:41:34.055221 kubelet[2784]: E0123 05:41:34.054463 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgb5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-56x2c_calico-system(f2986fef-16f6-4f5c-ada1-3406bc086cb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 05:41:34.055400 containerd[1597]: time="2026-01-23T05:41:34.054551107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 05:41:34.055990 kubelet[2784]: E0123 05:41:34.055938 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-56x2c" podUID="f2986fef-16f6-4f5c-ada1-3406bc086cb8"
Jan 23 05:41:34.112339 containerd[1597]: time="2026-01-23T05:41:34.112241241Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 05:41:34.113769 containerd[1597]: time="2026-01-23T05:41:34.113664345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 05:41:34.113955 containerd[1597]: time="2026-01-23T05:41:34.113786844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 23 05:41:34.114103 kubelet[2784]: E0123 05:41:34.113946 2784 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 05:41:34.114103 kubelet[2784]: E0123 05:41:34.114096 2784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 05:41:34.114306 kubelet[2784]: E0123 05:41:34.114237 2784 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrpbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-fb784f8d9-hrfdf_calico-apiserver(3912831c-901b-4041-9115-637bb8679bc2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 05:41:34.115679 kubelet[2784]: E0123 05:41:34.115643 2784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-fb784f8d9-hrfdf" podUID="3912831c-901b-4041-9115-637bb8679bc2"