Jan 20 02:15:13.979132 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:27:27 -00 2026
Jan 20 02:15:13.979187 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffc050d3940163f278aec6799df208aabf8f27b8f3e958c63256c067960f0c44
Jan 20 02:15:13.979207 kernel: BIOS-provided physical RAM map:
Jan 20 02:15:13.979217 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 02:15:13.979225 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 02:15:13.979233 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 02:15:13.979244 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 20 02:15:13.979256 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 20 02:15:13.979292 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 02:15:13.979302 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 02:15:13.979316 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 02:15:13.979325 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 02:15:13.979334 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 02:15:13.979343 kernel: NX (Execute Disable) protection: active
Jan 20 02:15:13.979356 kernel: APIC: Static calls initialized
Jan 20 02:15:13.979397 kernel: SMBIOS 2.8 present.
Jan 20 02:15:13.979432 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 20 02:15:13.979442 kernel: DMI: Memory slots populated: 1/1
Jan 20 02:15:13.979451 kernel: Hypervisor detected: KVM
Jan 20 02:15:13.979462 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 02:15:13.979474 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 02:15:13.979485 kernel: kvm-clock: using sched offset of 43018383096 cycles
Jan 20 02:15:13.979496 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 02:15:13.979506 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 02:15:13.979587 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 02:15:13.979600 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 02:15:13.979612 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 02:15:13.979624 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 02:15:13.979636 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 02:15:13.979647 kernel: Using GB pages for direct mapping
Jan 20 02:15:13.979659 kernel: ACPI: Early table checksum verification disabled
Jan 20 02:15:13.979676 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 20 02:15:13.979687 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:15:13.979699 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:15:13.979712 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:15:13.979725 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 20 02:15:13.979735 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:15:13.979745 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:15:13.979760 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:15:13.979807 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 02:15:13.979828 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 20 02:15:13.979840 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 20 02:15:13.979850 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 20 02:15:13.979865 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 20 02:15:13.979876 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 20 02:15:13.979886 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 20 02:15:13.979896 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 20 02:15:13.979907 kernel: No NUMA configuration found
Jan 20 02:15:13.979920 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 20 02:15:13.979938 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 20 02:15:13.979949 kernel: Zone ranges:
Jan 20 02:15:13.979959 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 02:15:13.979970 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 20 02:15:13.979980 kernel: Normal empty
Jan 20 02:15:13.979990 kernel: Device empty
Jan 20 02:15:13.980000 kernel: Movable zone start for each node
Jan 20 02:15:13.980011 kernel: Early memory node ranges
Jan 20 02:15:13.980028 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 02:15:13.980040 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 20 02:15:13.980052 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 20 02:15:13.980064 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 02:15:13.980076 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 02:15:13.980111 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 20 02:15:13.980123 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 02:15:13.980138 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 02:15:13.980151 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 02:15:13.980163 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 02:15:13.980193 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 02:15:13.980206 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 02:15:13.980218 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 02:15:13.980230 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 02:15:13.980247 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 02:15:13.980260 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 02:15:13.980272 kernel: TSC deadline timer available
Jan 20 02:15:13.980282 kernel: CPU topo: Max. logical packages: 1
Jan 20 02:15:13.980292 kernel: CPU topo: Max. logical dies: 1
Jan 20 02:15:13.980302 kernel: CPU topo: Max. dies per package: 1
Jan 20 02:15:13.980312 kernel: CPU topo: Max. threads per core: 1
Jan 20 02:15:13.980322 kernel: CPU topo: Num. cores per package: 4
Jan 20 02:15:13.980337 kernel: CPU topo: Num. threads per package: 4
Jan 20 02:15:13.980349 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 02:15:13.980362 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 02:15:13.980376 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 02:15:13.980386 kernel: kvm-guest: setup PV sched yield
Jan 20 02:15:13.980397 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 02:15:13.980407 kernel: Booting paravirtualized kernel on KVM
Jan 20 02:15:13.980423 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 02:15:13.980433 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 02:15:13.980443 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 02:15:13.980455 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 02:15:13.980468 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 02:15:13.980479 kernel: kvm-guest: PV spinlocks enabled
Jan 20 02:15:13.980490 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 02:15:13.980506 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffc050d3940163f278aec6799df208aabf8f27b8f3e958c63256c067960f0c44
Jan 20 02:15:13.980567 kernel: random: crng init done
Jan 20 02:15:13.980581 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 02:15:13.980594 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 02:15:13.980606 kernel: Fallback order for Node 0: 0
Jan 20 02:15:13.980618 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 20 02:15:13.980630 kernel: Policy zone: DMA32
Jan 20 02:15:13.980647 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 02:15:13.980659 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 02:15:13.980671 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 02:15:13.980683 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 02:15:13.980695 kernel: Dynamic Preempt: voluntary
Jan 20 02:15:13.980708 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 02:15:13.980721 kernel: rcu: RCU event tracing is enabled.
Jan 20 02:15:13.980737 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 02:15:13.980747 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 02:15:13.988982 kernel: Rude variant of Tasks RCU enabled.
Jan 20 02:15:13.989019 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 02:15:13.989030 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 02:15:13.989042 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 02:15:13.989053 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:15:13.989076 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:15:13.989087 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 02:15:13.989098 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 02:15:13.989110 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 02:15:13.989135 kernel: Console: colour VGA+ 80x25
Jan 20 02:15:13.989151 kernel: printk: legacy console [ttyS0] enabled
Jan 20 02:15:13.989163 kernel: ACPI: Core revision 20240827
Jan 20 02:15:13.989175 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 02:15:13.989187 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 02:15:13.989205 kernel: x2apic enabled
Jan 20 02:15:13.989216 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 02:15:13.989256 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 02:15:13.989269 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 02:15:13.989287 kernel: kvm-guest: setup PV IPIs
Jan 20 02:15:13.989301 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 02:15:13.989314 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:15:13.989325 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 02:15:13.989339 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 02:15:13.989351 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 02:15:13.989388 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 02:15:13.989409 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 02:15:13.989420 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 02:15:13.989433 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 02:15:13.989447 kernel: Speculative Store Bypass: Vulnerable
Jan 20 02:15:13.989458 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 02:15:13.989473 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 02:15:13.989487 kernel: active return thunk: srso_alias_return_thunk
Jan 20 02:15:13.989504 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 02:15:13.989586 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 02:15:13.989605 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 02:15:13.989619 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 02:15:13.989630 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 02:15:13.989644 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 02:15:13.989658 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 02:15:13.989676 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 02:15:13.989690 kernel: Freeing SMP alternatives memory: 32K
Jan 20 02:15:13.989704 kernel: pid_max: default: 32768 minimum: 301
Jan 20 02:15:13.989716 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 02:15:13.989729 kernel: landlock: Up and running.
Jan 20 02:15:13.989743 kernel: SELinux: Initializing.
Jan 20 02:15:13.989755 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:15:13.989807 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 02:15:13.989842 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 02:15:13.989858 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 02:15:13.989870 kernel: signal: max sigframe size: 1776
Jan 20 02:15:13.989883 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 02:15:13.989896 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 02:15:13.989909 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 02:15:13.989927 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 02:15:13.989940 kernel: smp: Bringing up secondary CPUs ...
Jan 20 02:15:13.989952 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 02:15:13.989964 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 02:15:13.989977 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 02:15:13.989990 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 02:15:13.990003 kernel: Memory: 2445296K/2571752K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 120520K reserved, 0K cma-reserved)
Jan 20 02:15:13.990023 kernel: devtmpfs: initialized
Jan 20 02:15:13.990034 kernel: x86/mm: Memory block size: 128MB
Jan 20 02:15:13.990046 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 02:15:13.990056 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 02:15:13.990067 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 02:15:13.990078 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 02:15:13.990089 kernel: audit: initializing netlink subsys (disabled)
Jan 20 02:15:13.990107 kernel: audit: type=2000 audit(1768875284.986:1): state=initialized audit_enabled=0 res=1
Jan 20 02:15:13.990121 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 02:15:13.990132 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 02:15:13.990143 kernel: cpuidle: using governor menu
Jan 20 02:15:13.990154 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 02:15:13.990165 kernel: dca service started, version 1.12.1
Jan 20 02:15:13.990176 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 20 02:15:13.990191 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 02:15:13.990204 kernel: PCI: Using configuration type 1 for base access
Jan 20 02:15:13.990217 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 02:15:13.990229 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 02:15:13.990240 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 02:15:13.990252 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 02:15:13.990263 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 02:15:13.990280 kernel: ACPI: Added _OSI(Module Device)
Jan 20 02:15:13.990293 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 02:15:13.990304 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 02:15:13.990316 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 02:15:13.990328 kernel: ACPI: Interpreter enabled
Jan 20 02:15:13.990338 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 02:15:13.990350 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 02:15:13.990361 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 02:15:13.990376 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 02:15:13.990387 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 02:15:13.990398 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 02:15:13.999462 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 02:15:14.002181 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 02:15:14.002498 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 02:15:14.002576 kernel: PCI host bridge to bus 0000:00
Jan 20 02:15:14.010072 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 02:15:14.010357 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 02:15:14.010706 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 02:15:14.011008 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 20 02:15:14.033117 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 02:15:14.047456 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 20 02:15:14.074636 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 02:15:14.075232 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 02:15:14.075594 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 02:15:14.075972 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 20 02:15:14.076258 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 20 02:15:14.076579 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 20 02:15:14.091475 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 02:15:14.098314 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 02:15:14.098709 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 20 02:15:14.099048 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 20 02:15:14.099329 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 20 02:15:14.099714 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 02:15:14.110847 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 20 02:15:14.111189 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 20 02:15:14.111497 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 20 02:15:14.112091 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 02:15:14.112389 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 20 02:15:14.112744 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 20 02:15:14.153993 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 20 02:15:14.154309 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 20 02:15:14.177477 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 02:15:14.193460 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 02:15:14.193906 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 12695 usecs
Jan 20 02:15:14.194414 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 02:15:14.201446 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 20 02:15:14.201953 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 20 02:15:14.202284 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 02:15:14.202662 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 20 02:15:14.202684 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 02:15:14.202696 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 02:15:14.202708 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 02:15:14.202721 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 02:15:14.202734 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 02:15:14.202755 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 02:15:14.202766 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 02:15:14.202825 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 02:15:14.202836 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 02:15:14.202848 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 02:15:14.202858 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 02:15:14.202871 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 02:15:14.202891 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 02:15:14.202902 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 02:15:14.202913 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 02:15:14.202924 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 02:15:14.202938 kernel: iommu: Default domain type: Translated
Jan 20 02:15:14.202950 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 02:15:14.202961 kernel: PCI: Using ACPI for IRQ routing
Jan 20 02:15:14.202976 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 02:15:14.202988 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 02:15:14.203002 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 20 02:15:14.203302 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 02:15:14.203651 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 02:15:14.203979 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 02:15:14.204004 kernel: vgaarb: loaded
Jan 20 02:15:14.204024 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 02:15:14.204035 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 02:15:14.204047 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 02:15:14.204058 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 02:15:14.204071 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 02:15:14.204082 kernel: pnp: PnP ACPI init
Jan 20 02:15:14.212731 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 02:15:14.212825 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 02:15:14.212865 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 02:15:14.212881 kernel: NET: Registered PF_INET protocol family
Jan 20 02:15:14.212895 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 02:15:14.212906 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 02:15:14.212917 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 02:15:14.212968 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 02:15:14.212981 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 02:15:14.212992 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 02:15:14.213003 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:15:14.213016 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 02:15:14.213029 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 02:15:14.213044 kernel: NET: Registered PF_XDP protocol family
Jan 20 02:15:14.213358 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 02:15:14.213726 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 02:15:14.214052 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 02:15:14.214332 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 20 02:15:14.214699 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 02:15:14.215021 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 20 02:15:14.215044 kernel: PCI: CLS 0 bytes, default 64
Jan 20 02:15:14.215064 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 02:15:14.215075 kernel: Initialise system trusted keyrings
Jan 20 02:15:14.215088 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 02:15:14.215101 kernel: Key type asymmetric registered
Jan 20 02:15:14.215115 kernel: Asymmetric key parser 'x509' registered
Jan 20 02:15:14.215126 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 02:15:14.215137 kernel: io scheduler mq-deadline registered
Jan 20 02:15:14.215153 kernel: io scheduler kyber registered
Jan 20 02:15:14.215165 kernel: io scheduler bfq registered
Jan 20 02:15:14.215180 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 02:15:14.215193 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 02:15:14.215204 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 02:15:14.215215 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 02:15:14.215227 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 02:15:14.215247 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 02:15:14.215258 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 02:15:14.215269 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 02:15:14.215280 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 02:15:14.226967 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 02:15:14.227009 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 02:15:14.227296 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 02:15:14.227650 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T02:14:56 UTC (1768875296)
Jan 20 02:15:14.237021 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 20 02:15:14.237058 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 02:15:14.237073 kernel: NET: Registered PF_INET6 protocol family
Jan 20 02:15:14.237084 kernel: Segment Routing with IPv6
Jan 20 02:15:14.237096 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 02:15:14.237116 kernel: NET: Registered PF_PACKET protocol family
Jan 20 02:15:14.237128 kernel: Key type dns_resolver registered
Jan 20 02:15:14.237140 kernel: IPI shorthand broadcast: enabled
Jan 20 02:15:14.237151 kernel: sched_clock: Marking stable (8077119204, 2793224078)->(12875329113, -2004985831)
Jan 20 02:15:14.237164 kernel: registered taskstats version 1
Jan 20 02:15:14.237176 kernel: Loading compiled-in X.509 certificates
Jan 20 02:15:14.237188 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 39f154fc6e329874bced8cdae9473f98b7dd3f43'
Jan 20 02:15:14.237200 kernel: Demotion targets for Node 0: null
Jan 20 02:15:14.237218 kernel: Key type .fscrypt registered
Jan 20 02:15:14.237230 kernel: Key type fscrypt-provisioning registered
Jan 20 02:15:14.237242 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 02:15:14.237255 kernel: ima: Allocated hash algorithm: sha1
Jan 20 02:15:14.237267 kernel: ima: No architecture policies found
Jan 20 02:15:14.237280 kernel: clk: Disabling unused clocks
Jan 20 02:15:14.237294 kernel: Freeing unused kernel image (initmem) memory: 15532K
Jan 20 02:15:14.237313 kernel: Write protecting the kernel read-only data: 47104k
Jan 20 02:15:14.237325 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Jan 20 02:15:14.237336 kernel: Run /init as init process
Jan 20 02:15:14.237348 kernel: with arguments:
Jan 20 02:15:14.237359 kernel: /init
Jan 20 02:15:14.237370 kernel: with environment:
Jan 20 02:15:14.237382 kernel: HOME=/
Jan 20 02:15:14.237399 kernel: TERM=linux
Jan 20 02:15:14.237410 kernel: SCSI subsystem initialized
Jan 20 02:15:14.237421 kernel: libata version 3.00 loaded.
Jan 20 02:15:14.238489 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 02:15:14.238574 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 02:15:14.238904 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 20 02:15:14.248245 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 20 02:15:14.248635 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 02:15:14.253411 kernel: scsi host0: ahci
Jan 20 02:15:14.253955 kernel: scsi host1: ahci
Jan 20 02:15:14.254280 kernel: scsi host2: ahci
Jan 20 02:15:14.254801 kernel: scsi host3: ahci
Jan 20 02:15:14.255156 kernel: scsi host4: ahci
Jan 20 02:15:14.255500 kernel: scsi host5: ahci
Jan 20 02:15:14.255568 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Jan 20 02:15:14.255583 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Jan 20 02:15:14.255595 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Jan 20 02:15:14.255607 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Jan 20 02:15:14.255625 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Jan 20 02:15:14.255637 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Jan 20 02:15:14.255648 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 02:15:14.255661 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 02:15:14.255674 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 02:15:14.255686 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 02:15:14.255698 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 02:15:14.255715 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 02:15:14.255727 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 02:15:14.255738 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 02:15:14.255750 kernel: ata3.00: applying bridge limits
Jan 20 02:15:14.255765 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 02:15:14.255814 kernel: ata3.00: configured for UDMA/100
Jan 20 02:15:14.256139 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 02:15:14.256166 kernel: hrtimer: interrupt took 44198167 ns
Jan 20 02:15:14.256472 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 02:15:14.270435 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 20 02:15:14.270482 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 02:15:14.270927 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 02:15:14.270971 kernel: GPT:16515071 != 27000831
Jan 20 02:15:14.270985 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 02:15:14.270997 kernel: GPT:16515071 != 27000831
Jan 20 02:15:14.271011 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 02:15:14.271026 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 02:15:14.271038 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 02:15:14.271361 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 02:15:14.271383 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 02:15:14.271403 kernel: device-mapper: uevent: version 1.0.3
Jan 20 02:15:14.271417 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 20 02:15:14.271430 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 20 02:15:14.271443 kernel: raid6: avx2x4 gen() 10456 MB/s
Jan 20 02:15:14.271456 kernel: raid6: avx2x2 gen() 10352 MB/s
Jan 20 02:15:14.271468 kernel: raid6: avx2x1 gen() 6476 MB/s
Jan 20 02:15:14.271482 kernel: raid6: using algorithm avx2x4 gen() 10456 MB/s
Jan 20 02:15:14.271500 kernel: raid6: .... xor() 1899 MB/s, rmw enabled
Jan 20 02:15:14.271513 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 02:15:14.271596 kernel: xor: automatically using best checksumming function avx
Jan 20 02:15:14.271612 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 02:15:14.271636 kernel: BTRFS: device fsid 95a8358a-4aa8-4215-9cd3-5b140c6c0a16 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (182)
Jan 20 02:15:14.271649 kernel: BTRFS info (device dm-0): first mount of filesystem 95a8358a-4aa8-4215-9cd3-5b140c6c0a16
Jan 20 02:15:14.271661 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:15:14.271672 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 02:15:14.271684 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 20 02:15:14.271698 kernel: loop: module loaded
Jan 20 02:15:14.271711 kernel: loop0: detected capacity change from 0 to 100552
Jan 20 02:15:14.271728 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 20 02:15:14.271744 systemd[1]: Successfully made /usr/ read-only.
Jan 20 02:15:14.271762 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 02:15:14.271817 systemd[1]: Detected virtualization kvm.
Jan 20 02:15:14.271835 systemd[1]: Detected architecture x86-64.
Jan 20 02:15:14.271847 systemd[1]: Running in initrd.
Jan 20 02:15:14.271896 systemd[1]: No hostname configured, using default hostname.
Jan 20 02:15:14.271911 systemd[1]: Hostname set to .
Jan 20 02:15:14.271925 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 20 02:15:14.271940 systemd[1]: Queued start job for default target initrd.target.
Jan 20 02:15:14.271953 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 02:15:14.271966 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 02:15:14.271982 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 02:15:14.271996 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 02:15:14.272012 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 02:15:14.272027 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 02:15:14.272042 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 02:15:14.272055 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 02:15:14.272072 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 02:15:14.272084 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 02:15:14.272100 systemd[1]: Reached target paths.target - Path Units. Jan 20 02:15:14.272114 systemd[1]: Reached target slices.target - Slice Units. Jan 20 02:15:14.272129 systemd[1]: Reached target swap.target - Swaps. Jan 20 02:15:14.272141 systemd[1]: Reached target timers.target - Timer Units. Jan 20 02:15:14.272153 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 02:15:14.272170 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 02:15:14.272185 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 02:15:14.272200 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 02:15:14.272215 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jan 20 02:15:14.272227 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 02:15:14.272240 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 02:15:14.272252 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 02:15:14.272270 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 02:15:14.272286 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 02:15:14.272301 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 02:15:14.272313 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 02:15:14.272324 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 02:15:14.272337 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 02:15:14.272358 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 02:15:14.272371 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 02:15:14.272383 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 02:15:14.272400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:15:14.272418 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 02:15:14.272433 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 02:15:14.272445 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 02:15:14.272458 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 02:15:14.272628 systemd-journald[322]: Collecting audit messages is enabled. 
Jan 20 02:15:14.272666 systemd-journald[322]: Journal started Jan 20 02:15:14.272694 systemd-journald[322]: Runtime Journal (/run/log/journal/24b55c020d704bf289c19e046358373f) is 6M, max 48.2M, 42.1M free. Jan 20 02:15:14.360886 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 02:15:14.363136 kernel: audit: type=1130 audit(1768875314.298:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:14.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:14.383224 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 02:15:15.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:15.153960 kernel: audit: type=1130 audit(1768875315.092:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:15.153607 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 02:15:15.161758 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 02:15:15.439572 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:15:15.628476 kernel: audit: type=1130 audit(1768875315.551:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:15:15.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:15.592255 systemd-tmpfiles[336]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 02:15:15.646510 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 02:15:15.815180 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 02:15:15.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:15.817909 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 02:15:15.940087 kernel: audit: type=1130 audit(1768875315.839:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:15.940238 kernel: audit: type=1130 audit(1768875315.916:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:15.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:15.866738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 20 02:15:16.043583 kernel: Bridge firewalling registered Jan 20 02:15:16.049486 systemd-modules-load[323]: Inserted module 'br_netfilter' Jan 20 02:15:16.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:16.053913 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 02:15:16.133379 kernel: audit: type=1130 audit(1768875316.052:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:16.138890 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 02:15:16.241264 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 02:15:16.303032 kernel: audit: type=1130 audit(1768875316.262:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:16.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:16.314644 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 02:15:16.397182 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 02:15:16.465706 kernel: audit: type=1130 audit(1768875316.396:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:15:16.465744 kernel: audit: type=1334 audit(1768875316.400:10): prog-id=6 op=LOAD Jan 20 02:15:16.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:16.400000 audit: BPF prog-id=6 op=LOAD Jan 20 02:15:16.401734 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 02:15:16.547622 dracut-cmdline[355]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ffc050d3940163f278aec6799df208aabf8f27b8f3e958c63256c067960f0c44 Jan 20 02:15:16.857590 systemd-resolved[358]: Positive Trust Anchors: Jan 20 02:15:16.858088 systemd-resolved[358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 02:15:16.863909 systemd-resolved[358]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 02:15:16.863963 systemd-resolved[358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 02:15:17.007699 systemd-resolved[358]: Defaulting to hostname 'linux'. 
Jan 20 02:15:17.031374 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 02:15:17.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:17.039028 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 02:15:17.070739 kernel: audit: type=1130 audit(1768875317.038:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:17.679340 kernel: Loading iSCSI transport class v2.0-870. Jan 20 02:15:17.876071 kernel: iscsi: registered transport (tcp) Jan 20 02:15:18.162226 kernel: iscsi: registered transport (qla4xxx) Jan 20 02:15:18.169247 kernel: QLogic iSCSI HBA Driver Jan 20 02:15:18.876963 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 02:15:19.115740 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 02:15:19.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:19.208413 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 02:15:20.113898 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 02:15:20.208140 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:15:20.208186 kernel: audit: type=1130 audit(1768875320.139:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:15:20.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:20.153906 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 02:15:20.256116 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 02:15:20.533212 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 02:15:20.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:20.589205 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 02:15:20.630591 kernel: audit: type=1130 audit(1768875320.562:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:20.630631 kernel: audit: type=1334 audit(1768875320.574:15): prog-id=7 op=LOAD Jan 20 02:15:20.630652 kernel: audit: type=1334 audit(1768875320.574:16): prog-id=8 op=LOAD Jan 20 02:15:20.574000 audit: BPF prog-id=7 op=LOAD Jan 20 02:15:20.574000 audit: BPF prog-id=8 op=LOAD Jan 20 02:15:20.790684 systemd-udevd[597]: Using default interface naming scheme 'v257'. Jan 20 02:15:21.018585 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 02:15:21.135307 kernel: audit: type=1130 audit(1768875321.052:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:15:21.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:21.140317 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 02:15:21.739569 dracut-pre-trigger[644]: rd.md=0: removing MD RAID activation Jan 20 02:15:22.112274 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 02:15:22.253794 kernel: audit: type=1130 audit(1768875322.130:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:22.253878 kernel: audit: type=1334 audit(1768875322.130:19): prog-id=9 op=LOAD Jan 20 02:15:22.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:22.130000 audit: BPF prog-id=9 op=LOAD Jan 20 02:15:22.159009 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 02:15:22.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:22.269576 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 02:15:22.407127 kernel: audit: type=1130 audit(1768875322.298:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:22.364813 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 20 02:15:22.990311 systemd-networkd[727]: lo: Link UP Jan 20 02:15:23.096232 kernel: audit: type=1130 audit(1768875323.045:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:23.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:22.990345 systemd-networkd[727]: lo: Gained carrier Jan 20 02:15:22.992040 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 02:15:23.046197 systemd[1]: Reached target network.target - Network. Jan 20 02:15:23.371917 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 02:15:23.459306 kernel: audit: type=1130 audit(1768875323.391:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:23.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:23.438506 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 02:15:24.264602 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 02:15:24.627497 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 02:15:24.646587 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 02:15:24.717414 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 02:15:24.786362 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 20 02:15:24.871048 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 02:15:24.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:24.882108 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 02:15:24.882250 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:15:24.905100 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:15:24.988196 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:15:25.193821 disk-uuid[773]: Primary Header is updated. Jan 20 02:15:25.193821 disk-uuid[773]: Secondary Entries is updated. Jan 20 02:15:25.193821 disk-uuid[773]: Secondary Header is updated. Jan 20 02:15:25.689325 kernel: AES CTR mode by8 optimization enabled Jan 20 02:15:25.995505 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 02:15:26.413601 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 02:15:27.068203 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:15:27.068268 kernel: audit: type=1130 audit(1768875327.024:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:27.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:26.413643 systemd-networkd[727]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 20 02:15:27.127429 kernel: audit: type=1130 audit(1768875327.087:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:27.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:26.450715 systemd-networkd[727]: eth0: Link UP Jan 20 02:15:26.502297 systemd-networkd[727]: eth0: Gained carrier Jan 20 02:15:27.201811 disk-uuid[774]: Warning: The kernel is still using the old partition table. Jan 20 02:15:27.201811 disk-uuid[774]: The new table will be used at the next reboot or after you Jan 20 02:15:27.201811 disk-uuid[774]: run partprobe(8) or kpartx(8) Jan 20 02:15:27.201811 disk-uuid[774]: The operation has completed successfully. Jan 20 02:15:26.502320 systemd-networkd[727]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 02:15:26.553106 systemd-networkd[727]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 02:15:26.990474 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 02:15:27.046012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:15:27.479344 kernel: audit: type=1130 audit(1768875327.398:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:27.479395 kernel: audit: type=1131 audit(1768875327.400:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:15:27.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:27.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:27.104320 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 02:15:27.520228 kernel: audit: type=1130 audit(1768875327.491:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:27.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:27.150267 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 02:15:27.165711 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 02:15:27.212842 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 02:15:27.254164 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 02:15:27.268784 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 02:15:27.401787 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 02:15:27.519279 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 20 02:15:27.775162 systemd-networkd[727]: eth0: Gained IPv6LL Jan 20 02:15:28.003699 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863) Jan 20 02:15:28.031306 kernel: BTRFS info (device vda6): first mount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251 Jan 20 02:15:28.031409 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 02:15:28.091079 kernel: BTRFS info (device vda6): turning on async discard Jan 20 02:15:28.091202 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 02:15:28.200859 kernel: BTRFS info (device vda6): last unmount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251 Jan 20 02:15:28.231010 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 02:15:28.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:28.280139 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 02:15:28.309615 kernel: audit: type=1130 audit(1768875328.261:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:15:31.306472 ignition[882]: Ignition 2.24.0 Jan 20 02:15:31.306503 ignition[882]: Stage: fetch-offline Jan 20 02:15:31.314052 ignition[882]: no configs at "/usr/lib/ignition/base.d" Jan 20 02:15:31.314092 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:15:31.314350 ignition[882]: parsed url from cmdline: "" Jan 20 02:15:31.314357 ignition[882]: no config URL provided Jan 20 02:15:31.314512 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 02:15:31.314587 ignition[882]: no config at "/usr/lib/ignition/user.ign" Jan 20 02:15:31.314761 ignition[882]: op(1): [started] loading QEMU firmware config module Jan 20 02:15:31.314770 ignition[882]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 02:15:31.447184 ignition[882]: op(1): [finished] loading QEMU firmware config module Jan 20 02:15:32.018282 ignition[882]: parsing config with SHA512: ee8e86664c98cce8e4fbe12217d5899e843dc99515cc686960636baebe75e0d8bbabaf2ccd723d61e300b8411cc2ad14dc1a65b3801320c15cf33472371d0b99 Jan 20 02:15:32.172989 unknown[882]: fetched base config from "system" Jan 20 02:15:32.173026 unknown[882]: fetched user config from "qemu" Jan 20 02:15:32.257823 ignition[882]: fetch-offline: fetch-offline passed Jan 20 02:15:32.258073 ignition[882]: Ignition finished successfully Jan 20 02:15:32.318475 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 02:15:32.402930 kernel: audit: type=1130 audit(1768875332.343:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:32.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:15:32.345848 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 02:15:32.379009 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 02:15:32.658937 ignition[891]: Ignition 2.24.0 Jan 20 02:15:32.659008 ignition[891]: Stage: kargs Jan 20 02:15:32.659491 ignition[891]: no configs at "/usr/lib/ignition/base.d" Jan 20 02:15:32.659512 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:15:32.677113 ignition[891]: kargs: kargs passed Jan 20 02:15:32.677235 ignition[891]: Ignition finished successfully Jan 20 02:15:32.766672 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 02:15:32.806902 kernel: audit: type=1130 audit(1768875332.777:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:32.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:32.794780 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 02:15:34.348401 ignition[898]: Ignition 2.24.0 Jan 20 02:15:34.348420 ignition[898]: Stage: disks Jan 20 02:15:34.348742 ignition[898]: no configs at "/usr/lib/ignition/base.d" Jan 20 02:15:34.348761 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 02:15:34.360120 ignition[898]: disks: disks passed Jan 20 02:15:34.360250 ignition[898]: Ignition finished successfully Jan 20 02:15:34.381263 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 20 02:15:34.554771 kernel: audit: type=1130 audit(1768875334.390:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:34.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:34.392748 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 02:15:34.419294 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 02:15:34.450068 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 02:15:34.464200 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 02:15:34.469638 systemd[1]: Reached target basic.target - Basic System. Jan 20 02:15:34.505430 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 02:15:34.917178 systemd-fsck[907]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 20 02:15:34.950118 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 02:15:35.007673 kernel: audit: type=1130 audit(1768875334.949:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:34.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:34.986196 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 02:15:36.031200 kernel: EXT4-fs (vda9): mounted filesystem 452c2147-bc43-4f48-ad5f-dc139dd95c0b r/w with ordered data mode. Quota mode: none. 
Jan 20 02:15:36.036410 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 02:15:36.054296 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 02:15:36.077501 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 02:15:36.135135 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 02:15:36.163799 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 20 02:15:36.163875 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 02:15:36.265387 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (916)
Jan 20 02:15:36.265436 kernel: BTRFS info (device vda6): first mount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251
Jan 20 02:15:36.265457 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:15:36.163919 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 02:15:36.286844 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 02:15:36.316628 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 02:15:36.380143 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 02:15:36.380234 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 02:15:36.393961 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 02:15:37.301906 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 02:15:37.419350 kernel: audit: type=1130 audit(1768875337.339:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:37.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:37.343719 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 02:15:37.362870 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 02:15:37.607820 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 02:15:37.658426 kernel: BTRFS info (device vda6): last unmount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251
Jan 20 02:15:37.998057 ignition[1014]: INFO : Ignition 2.24.0
Jan 20 02:15:37.998057 ignition[1014]: INFO : Stage: mount
Jan 20 02:15:37.998057 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:15:37.998057 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:15:37.998057 ignition[1014]: INFO : mount: mount passed
Jan 20 02:15:37.998057 ignition[1014]: INFO : Ignition finished successfully
Jan 20 02:15:38.201448 kernel: audit: type=1130 audit(1768875338.000:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:38.202492 kernel: audit: type=1130 audit(1768875338.048:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:38.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:38.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:37.983164 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 02:15:38.014857 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 02:15:38.092311 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 02:15:38.269689 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 02:15:38.362948 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1027)
Jan 20 02:15:38.383203 kernel: BTRFS info (device vda6): first mount of filesystem ad08584f-77ce-45c9-9cd1-daa815089251
Jan 20 02:15:38.383292 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 02:15:38.447304 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 02:15:38.447392 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 02:15:38.460057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 02:15:39.273872 ignition[1044]: INFO : Ignition 2.24.0
Jan 20 02:15:39.273872 ignition[1044]: INFO : Stage: files
Jan 20 02:15:39.273872 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:15:39.273872 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:15:39.308307 ignition[1044]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 02:15:39.308307 ignition[1044]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 02:15:39.308307 ignition[1044]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 02:15:39.347446 ignition[1044]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 02:15:39.347446 ignition[1044]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 02:15:39.347446 ignition[1044]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 02:15:39.347446 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 20 02:15:39.347446 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 20 02:15:39.315282 unknown[1044]: wrote ssh authorized keys file for user: core
Jan 20 02:15:39.603966 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 20 02:15:40.222447 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 20 02:15:40.222447 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 02:15:40.222447 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 02:15:40.222447 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 02:15:40.222447 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 02:15:40.222447 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 02:15:40.222447 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 02:15:40.222447 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 02:15:40.222447 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 02:15:40.409794 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 02:15:40.409794 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 02:15:40.409794 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 02:15:40.409794 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 02:15:40.409794 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 02:15:40.409794 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 20 02:15:40.925186 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 20 02:15:45.107412 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 02:15:45.107412 ignition[1044]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 20 02:15:45.287290 ignition[1044]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 02:15:45.287290 ignition[1044]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 02:15:45.287290 ignition[1044]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 20 02:15:45.287290 ignition[1044]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 20 02:15:45.287290 ignition[1044]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 02:15:45.287290 ignition[1044]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 02:15:45.287290 ignition[1044]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 20 02:15:45.287290 ignition[1044]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 02:15:45.746127 ignition[1044]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 02:15:45.791997 ignition[1044]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 02:15:45.814752 ignition[1044]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 02:15:45.814752 ignition[1044]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 02:15:45.814752 ignition[1044]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 02:15:45.814752 ignition[1044]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 02:15:45.814752 ignition[1044]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 02:15:45.814752 ignition[1044]: INFO : files: files passed
Jan 20 02:15:45.814752 ignition[1044]: INFO : Ignition finished successfully
Jan 20 02:15:46.067754 kernel: audit: type=1130 audit(1768875345.960:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:45.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:45.817123 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 02:15:46.143846 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 02:15:46.238363 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 02:15:46.366780 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 02:15:46.386673 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 02:15:46.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:46.496404 kernel: audit: type=1130 audit(1768875346.450:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:46.496888 kernel: audit: type=1131 audit(1768875346.450:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:46.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:46.536170 initrd-setup-root-after-ignition[1075]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 02:15:46.581017 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:15:46.606191 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:15:46.606191 initrd-setup-root-after-ignition[1077]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 02:15:46.670786 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 02:15:46.718801 kernel: audit: type=1130 audit(1768875346.684:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:46.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:46.689763 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 02:15:46.760940 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 02:15:47.227972 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 02:15:47.229387 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 02:15:47.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:47.302869 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 02:15:47.341769 kernel: audit: type=1130 audit(1768875347.296:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:47.341815 kernel: audit: type=1131 audit(1768875347.301:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:47.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:47.347701 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 02:15:47.436984 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 02:15:47.481043 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 02:15:47.745876 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 02:15:47.803181 kernel: audit: type=1130 audit(1768875347.755:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:47.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:47.783427 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 02:15:47.895015 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 02:15:47.903631 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 02:15:47.937593 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 02:15:47.998620 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 02:15:48.053344 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 02:15:48.053885 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 02:15:48.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:48.116355 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 02:15:48.301240 kernel: audit: type=1131 audit(1768875348.114:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:48.169203 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 02:15:48.187588 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 02:15:48.300312 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 02:15:48.491330 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 02:15:48.545625 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 02:15:48.605475 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 02:15:48.682323 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 02:15:48.771689 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 02:15:48.841400 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 02:15:48.885649 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 02:15:48.967163 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 02:15:49.000417 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 02:15:49.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:49.100685 kernel: audit: type=1131 audit(1768875349.071:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:49.141789 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 02:15:49.206869 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 02:15:49.244032 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 02:15:49.249202 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 02:15:49.293488 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 02:15:49.293806 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 02:15:49.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:49.419897 kernel: audit: type=1131 audit(1768875349.404:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:49.418485 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 02:15:49.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:49.418825 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 02:15:49.452764 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 02:15:49.465977 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 02:15:49.474664 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 02:15:49.518819 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 02:15:49.548435 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 02:15:49.599589 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 02:15:49.599891 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 02:15:49.737427 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 02:15:49.741382 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 02:15:49.779236 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 20 02:15:49.782653 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 20 02:15:49.880751 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 02:15:49.883076 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 02:15:49.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:49.993032 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 02:15:50.002290 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 02:15:50.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:50.115216 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 02:15:50.183665 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 02:15:50.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:50.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:50.206780 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 02:15:50.207137 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 02:15:50.243613 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 02:15:50.243814 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 02:15:50.279401 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 02:15:50.279666 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 02:15:50.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:50.413290 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 02:15:50.415425 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 02:15:50.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:50.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:50.718870 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 02:15:50.778187 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 02:15:50.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:50.784766 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 02:15:50.844950 ignition[1101]: INFO : Ignition 2.24.0
Jan 20 02:15:50.844950 ignition[1101]: INFO : Stage: umount
Jan 20 02:15:50.901473 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 02:15:50.901473 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 02:15:51.024496 ignition[1101]: INFO : umount: umount passed
Jan 20 02:15:51.024496 ignition[1101]: INFO : Ignition finished successfully
Jan 20 02:15:51.064442 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 02:15:51.064792 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 02:15:51.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.139362 kernel: kauditd_printk_skb: 9 callbacks suppressed
Jan 20 02:15:51.139433 kernel: audit: type=1131 audit(1768875351.131:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.139025 systemd[1]: Stopped target network.target - Network.
Jan 20 02:15:51.209962 kernel: audit: type=1131 audit(1768875351.175:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.169593 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 02:15:51.265310 kernel: audit: type=1131 audit(1768875351.216:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.170012 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 02:15:51.181296 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 02:15:51.181407 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 02:15:51.241880 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 02:15:51.243233 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 02:15:51.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.352925 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 02:15:51.417060 kernel: audit: type=1131 audit(1768875351.347:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.353143 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 02:15:51.509367 kernel: audit: type=1131 audit(1768875351.454:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.465210 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 02:15:51.613284 kernel: audit: type=1131 audit(1768875351.522:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.465386 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 02:15:51.583966 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 02:15:51.665962 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 02:15:51.712766 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 02:15:51.757276 kernel: audit: type=1131 audit(1768875351.713:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.712968 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 02:15:51.772149 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 02:15:51.772387 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 02:15:51.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.834284 kernel: audit: type=1131 audit(1768875351.804:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.836000 audit: BPF prog-id=9 op=UNLOAD
Jan 20 02:15:51.841097 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 20 02:15:52.049448 kernel: audit: type=1334 audit(1768875351.836:64): prog-id=9 op=UNLOAD
Jan 20 02:15:52.049489 kernel: audit: type=1334 audit(1768875351.847:65): prog-id=6 op=UNLOAD
Jan 20 02:15:51.847000 audit: BPF prog-id=6 op=UNLOAD
Jan 20 02:15:51.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:15:51.858204 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 02:15:51.858318 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 02:15:51.884057 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 02:15:51.898394 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 02:15:51.898596 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 02:15:52.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:51.934829 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 02:15:51.934938 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 02:15:51.958435 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 02:15:51.958600 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 02:15:51.982284 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 02:15:52.150855 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 02:15:52.154591 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 02:15:52.348877 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 02:15:52.348998 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 02:15:52.392019 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 02:15:52.392218 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 02:15:52.467716 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 02:15:52.467901 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 02:15:52.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:15:52.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:52.602314 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 02:15:52.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:52.602435 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 02:15:52.610112 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 02:15:52.610250 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 02:15:52.669471 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 02:15:52.763867 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 02:15:52.818713 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 02:15:52.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:52.945253 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 02:15:52.945486 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 02:15:52.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:15:52.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:53.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:53.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:52.995773 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 02:15:52.995889 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 02:15:53.000574 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 02:15:53.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:53.000677 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 02:15:53.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:53.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:15:53.004588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 02:15:53.004683 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 02:15:53.157822 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 02:15:53.161595 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 02:15:53.216355 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 02:15:53.222052 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 02:15:53.353395 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 02:15:53.611936 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 02:15:53.971421 systemd[1]: Switching root. Jan 20 02:15:54.188789 systemd-journald[322]: Received SIGTERM from PID 1 (systemd). Jan 20 02:15:54.189066 systemd-journald[322]: Journal stopped Jan 20 02:16:08.282887 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 02:16:08.283020 kernel: SELinux: policy capability open_perms=1 Jan 20 02:16:08.283043 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 02:16:08.283078 kernel: SELinux: policy capability always_check_network=0 Jan 20 02:16:08.283097 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 02:16:08.283117 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 02:16:08.283135 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 02:16:08.283159 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 02:16:08.283187 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 02:16:08.283210 systemd[1]: Successfully loaded SELinux policy in 481.478ms. Jan 20 02:16:08.283246 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 76.261ms. 
Jan 20 02:16:08.283266 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 02:16:08.283285 systemd[1]: Detected virtualization kvm. Jan 20 02:16:08.283346 systemd[1]: Detected architecture x86-64. Jan 20 02:16:08.283366 systemd[1]: Detected first boot. Jan 20 02:16:08.283389 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 02:16:08.283408 kernel: kauditd_printk_skb: 17 callbacks suppressed Jan 20 02:16:08.283436 kernel: audit: type=1334 audit(1768875356.266:83): prog-id=10 op=LOAD Jan 20 02:16:08.283455 kernel: audit: type=1334 audit(1768875356.266:84): prog-id=10 op=UNLOAD Jan 20 02:16:08.283472 kernel: audit: type=1334 audit(1768875356.266:85): prog-id=11 op=LOAD Jan 20 02:16:08.283489 kernel: audit: type=1334 audit(1768875356.266:86): prog-id=11 op=UNLOAD Jan 20 02:16:08.283514 zram_generator::config[1146]: No configuration found. Jan 20 02:16:08.283605 kernel: Guest personality initialized and is inactive Jan 20 02:16:08.283624 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 02:16:08.283644 kernel: Initialized host personality Jan 20 02:16:08.283662 kernel: NET: Registered PF_VSOCK protocol family Jan 20 02:16:08.283680 systemd[1]: Populated /etc with preset unit settings. 
Jan 20 02:16:08.283698 kernel: audit: type=1334 audit(1768875363.331:87): prog-id=12 op=LOAD Jan 20 02:16:08.283715 kernel: audit: type=1334 audit(1768875363.331:88): prog-id=3 op=UNLOAD Jan 20 02:16:08.283736 kernel: audit: type=1334 audit(1768875363.331:89): prog-id=13 op=LOAD Jan 20 02:16:08.283755 kernel: audit: type=1334 audit(1768875363.331:90): prog-id=14 op=LOAD Jan 20 02:16:08.283772 kernel: audit: type=1334 audit(1768875363.331:91): prog-id=4 op=UNLOAD Jan 20 02:16:08.283788 kernel: audit: type=1334 audit(1768875363.331:92): prog-id=5 op=UNLOAD Jan 20 02:16:08.283810 kernel: audit: type=1131 audit(1768875363.349:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:08.283828 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 02:16:08.283849 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 02:16:08.283871 kernel: audit: type=1130 audit(1768875363.433:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:08.283889 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 02:16:08.283907 kernel: audit: type=1131 audit(1768875363.433:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:08.283922 kernel: audit: type=1334 audit(1768875363.546:96): prog-id=12 op=UNLOAD Jan 20 02:16:08.283953 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 02:16:08.283977 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Jan 20 02:16:08.283995 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 02:16:08.284015 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 02:16:08.284034 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 02:16:08.284054 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 02:16:08.284074 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 02:16:08.284092 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 02:16:08.284117 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 02:16:08.284139 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 02:16:08.284159 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 02:16:08.284179 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 02:16:08.284206 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 02:16:08.284230 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 02:16:08.284257 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 02:16:08.284278 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 02:16:08.284344 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 02:16:08.284365 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 02:16:08.284383 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 02:16:08.284400 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Jan 20 02:16:08.284418 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 02:16:08.284438 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 02:16:08.284463 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 02:16:08.284483 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 02:16:08.284504 systemd[1]: Reached target slices.target - Slice Units. Jan 20 02:16:08.284591 systemd[1]: Reached target swap.target - Swaps. Jan 20 02:16:08.284618 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 02:16:08.284641 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 02:16:08.284662 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 02:16:08.284686 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 02:16:08.284704 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 02:16:08.284721 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 02:16:08.284740 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 02:16:08.284763 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 02:16:08.284781 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 02:16:08.284798 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 02:16:08.284816 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 02:16:08.284838 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 02:16:08.284859 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 02:16:08.284880 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 20 02:16:08.284899 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:16:08.284917 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 02:16:08.284934 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 02:16:08.284956 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 02:16:08.284978 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 02:16:08.285001 systemd[1]: Reached target machines.target - Containers. Jan 20 02:16:08.285018 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 02:16:08.285035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 02:16:08.285053 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 02:16:08.285074 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 02:16:08.285096 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 02:16:08.285114 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 02:16:08.285131 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 02:16:08.285148 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 02:16:08.285168 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 02:16:08.285189 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 02:16:08.285206 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 20 02:16:08.285228 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 02:16:08.285257 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 02:16:08.285282 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 02:16:08.303587 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 02:16:08.303633 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 02:16:08.303654 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 02:16:08.303673 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 02:16:08.303699 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 02:16:08.303722 kernel: fuse: init (API version 7.41) Jan 20 02:16:08.303747 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1435594238 wd_nsec: 1435593475 Jan 20 02:16:08.303766 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 02:16:08.303785 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 02:16:08.303804 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:16:08.303827 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 02:16:08.303852 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 02:16:08.303870 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 02:16:08.303888 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 20 02:16:08.303906 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 02:16:08.305401 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 02:16:08.305488 systemd-journald[1230]: Collecting audit messages is enabled. Jan 20 02:16:08.305603 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 02:16:08.305629 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 02:16:08.305655 systemd-journald[1230]: Journal started Jan 20 02:16:08.305685 systemd-journald[1230]: Runtime Journal (/run/log/journal/24b55c020d704bf289c19e046358373f) is 6M, max 48.2M, 42.1M free. Jan 20 02:16:04.566000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 02:16:05.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:05.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:16:05.989000 audit: BPF prog-id=14 op=UNLOAD Jan 20 02:16:05.989000 audit: BPF prog-id=13 op=UNLOAD Jan 20 02:16:06.000000 audit: BPF prog-id=15 op=LOAD Jan 20 02:16:06.076000 audit: BPF prog-id=16 op=LOAD Jan 20 02:16:06.076000 audit: BPF prog-id=17 op=LOAD Jan 20 02:16:08.262000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 02:16:08.262000 audit[1230]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffca507bcd0 a2=4000 a3=0 items=0 ppid=1 pid=1230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:16:08.262000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 02:16:08.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:03.285507 systemd[1]: Queued start job for default target multi-user.target. Jan 20 02:16:03.339629 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 02:16:03.343498 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 02:16:03.349981 systemd[1]: systemd-journald.service: Consumed 2.588s CPU time. Jan 20 02:16:08.457071 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 02:16:08.457199 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 02:16:08.457238 kernel: audit: type=1130 audit(1768875368.363:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:16:08.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:08.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:08.579259 kernel: audit: type=1130 audit(1768875368.480:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:08.497419 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 02:16:08.497878 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 02:16:08.586987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 02:16:08.656704 kernel: audit: type=1130 audit(1768875368.585:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:08.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:08.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:16:08.857629 kernel: audit: type=1131 audit(1768875368.585:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:08.903927 kernel: audit: type=1130 audit(1768875368.783:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:08.904094 kernel: audit: type=1131 audit(1768875368.784:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:08.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:08.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:08.676830 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 02:16:08.911232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 02:16:08.919140 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 02:16:08.939460 kernel: ACPI: bus type drm_connector registered Jan 20 02:16:08.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:08.959802 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 20 02:16:08.960183 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 02:16:08.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.008453 kernel: audit: type=1130 audit(1768875368.957:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.008672 kernel: audit: type=1131 audit(1768875368.957:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.089241 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 02:16:09.089780 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 02:16:09.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.124836 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 02:16:09.125228 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 02:16:09.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.244033 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 20 02:16:09.247416 kernel: audit: type=1130 audit(1768875369.081:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.247473 kernel: audit: type=1131 audit(1768875369.081:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.283281 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 20 02:16:09.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.348296 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 02:16:09.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.417673 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 02:16:09.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.532187 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 02:16:09.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:09.772953 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 02:16:09.801390 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 02:16:09.840820 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 02:16:09.857839 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 02:16:09.878685 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 02:16:09.878796 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 20 02:16:09.894099 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 02:16:09.911192 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 02:16:09.911482 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 02:16:09.940901 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 02:16:09.957231 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 02:16:09.974269 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 02:16:10.006773 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 02:16:10.025894 systemd-journald[1230]: Time spent on flushing to /var/log/journal/24b55c020d704bf289c19e046358373f is 290.362ms for 1155 entries. Jan 20 02:16:10.025894 systemd-journald[1230]: System Journal (/var/log/journal/24b55c020d704bf289c19e046358373f) is 8M, max 163.5M, 155.5M free. Jan 20 02:16:10.438614 systemd-journald[1230]: Received client request to flush runtime journal. Jan 20 02:16:10.438718 kernel: loop1: detected capacity change from 0 to 111560 Jan 20 02:16:10.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:10.049010 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 02:16:10.055856 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 20 02:16:10.104582 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 02:16:10.248781 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 02:16:10.308743 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 02:16:10.352885 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 02:16:10.398192 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 02:16:10.433178 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 02:16:10.473212 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 02:16:10.508151 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 02:16:10.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:10.593684 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 02:16:10.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:10.738506 kernel: loop2: detected capacity change from 0 to 50784 Jan 20 02:16:10.738215 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 02:16:10.756963 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 02:16:10.771251 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Jan 20 02:16:10.773159 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. 
Jan 20 02:16:10.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:10.805311 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 02:16:10.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:10.855094 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 02:16:10.881645 kernel: loop3: detected capacity change from 0 to 219144 Jan 20 02:16:11.574981 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 02:16:11.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:11.644000 audit: BPF prog-id=18 op=LOAD Jan 20 02:16:11.644000 audit: BPF prog-id=19 op=LOAD Jan 20 02:16:11.644000 audit: BPF prog-id=20 op=LOAD Jan 20 02:16:11.653786 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 02:16:11.685000 audit: BPF prog-id=21 op=LOAD Jan 20 02:16:11.693854 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 02:16:11.746807 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 20 02:16:12.068000 audit: BPF prog-id=22 op=LOAD Jan 20 02:16:12.068000 audit: BPF prog-id=23 op=LOAD Jan 20 02:16:12.068000 audit: BPF prog-id=24 op=LOAD Jan 20 02:16:12.133620 kernel: loop4: detected capacity change from 0 to 111560 Jan 20 02:16:12.169302 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 02:16:12.262000 audit: BPF prog-id=25 op=LOAD Jan 20 02:16:12.476000 audit: BPF prog-id=26 op=LOAD Jan 20 02:16:12.476000 audit: BPF prog-id=27 op=LOAD Jan 20 02:16:12.706222 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 02:16:13.362620 kernel: loop5: detected capacity change from 0 to 50784 Jan 20 02:16:13.365099 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Jan 20 02:16:13.365128 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Jan 20 02:16:13.412777 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 02:16:13.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:13.476181 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 20 02:16:13.476265 kernel: audit: type=1130 audit(1768875373.468:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:13.572612 kernel: loop6: detected capacity change from 0 to 219144 Jan 20 02:16:13.996864 (sd-merge)[1294]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 20 02:16:14.052832 (sd-merge)[1294]: Merged extensions into '/usr'. Jan 20 02:16:14.093066 systemd[1]: Reload requested from client PID 1269 ('systemd-sysext') (unit systemd-sysext.service)... 
Jan 20 02:16:14.093608 systemd[1]: Reloading... Jan 20 02:16:14.187874 systemd-nsresourced[1295]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 02:16:14.971557 zram_generator::config[1340]: No configuration found. Jan 20 02:16:15.568167 systemd-resolved[1292]: Positive Trust Anchors: Jan 20 02:16:15.568190 systemd-resolved[1292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 02:16:15.568197 systemd-resolved[1292]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 02:16:15.568243 systemd-resolved[1292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 02:16:15.602819 systemd-resolved[1292]: Defaulting to hostname 'linux'. Jan 20 02:16:15.667624 systemd-oomd[1291]: No swap; memory pressure usage will be degraded Jan 20 02:16:15.999758 systemd[1]: Reloading finished in 1905 ms. Jan 20 02:16:16.074128 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 02:16:16.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:16.093016 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. 
Jan 20 02:16:16.106625 kernel: audit: type=1130 audit(1768875376.087:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:16.123765 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 02:16:16.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:16.166202 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 02:16:16.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:16.178895 kernel: audit: type=1130 audit(1768875376.122:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:16.178996 kernel: audit: type=1130 audit(1768875376.164:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:16.199579 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 02:16:16.203966 kernel: audit: type=1130 audit(1768875376.198:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:16:16.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:16.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:16.264405 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 02:16:16.298704 kernel: audit: type=1130 audit(1768875376.227:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:16.316116 systemd[1]: Starting ensure-sysext.service... Jan 20 02:16:16.343981 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 02:16:16.392000 audit: BPF prog-id=28 op=LOAD Jan 20 02:16:16.403753 kernel: audit: type=1334 audit(1768875376.392:148): prog-id=28 op=LOAD Jan 20 02:16:16.404000 audit: BPF prog-id=15 op=UNLOAD Jan 20 02:16:16.404000 audit: BPF prog-id=29 op=LOAD Jan 20 02:16:16.429595 kernel: audit: type=1334 audit(1768875376.404:149): prog-id=15 op=UNLOAD Jan 20 02:16:16.429670 kernel: audit: type=1334 audit(1768875376.404:150): prog-id=29 op=LOAD Jan 20 02:16:16.404000 audit: BPF prog-id=30 op=LOAD Jan 20 02:16:16.466903 kernel: audit: type=1334 audit(1768875376.404:151): prog-id=30 op=LOAD Jan 20 02:16:16.404000 audit: BPF prog-id=16 op=UNLOAD Jan 20 02:16:16.404000 audit: BPF prog-id=17 op=UNLOAD Jan 20 02:16:16.415000 audit: BPF prog-id=31 op=LOAD Jan 20 02:16:16.415000 audit: BPF prog-id=25 op=UNLOAD Jan 20 02:16:16.415000 audit: BPF prog-id=32 op=LOAD Jan 20 02:16:16.417000 audit: BPF prog-id=33 op=LOAD Jan 20 02:16:16.420000 audit: BPF prog-id=26 op=UNLOAD Jan 20 02:16:16.420000 audit: BPF prog-id=27 op=UNLOAD Jan 20 02:16:16.427000 audit: BPF prog-id=34 op=LOAD Jan 20 02:16:16.427000 audit: BPF prog-id=21 op=UNLOAD Jan 20 02:16:16.434000 audit: BPF prog-id=35 op=LOAD Jan 20 02:16:16.434000 audit: BPF prog-id=18 op=UNLOAD Jan 20 02:16:16.439000 audit: BPF prog-id=36 op=LOAD Jan 20 02:16:16.439000 audit: BPF prog-id=37 op=LOAD Jan 20 02:16:16.439000 audit: BPF prog-id=19 op=UNLOAD Jan 20 02:16:16.439000 audit: BPF prog-id=20 op=UNLOAD Jan 20 02:16:16.460000 audit: BPF prog-id=38 op=LOAD Jan 20 02:16:16.460000 audit: BPF prog-id=22 op=UNLOAD Jan 20 02:16:16.460000 audit: BPF prog-id=39 op=LOAD Jan 20 02:16:16.464000 audit: BPF prog-id=40 op=LOAD Jan 20 02:16:16.464000 audit: BPF prog-id=23 op=UNLOAD Jan 20 02:16:16.464000 audit: BPF prog-id=24 op=UNLOAD Jan 20 02:16:16.509567 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 20 02:16:16.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:16.549896 systemd[1]: Reload requested from client PID 1376 ('systemctl') (unit ensure-sysext.service)... Jan 20 02:16:16.550047 systemd[1]: Reloading... Jan 20 02:16:16.560963 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 02:16:16.561314 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 02:16:16.562648 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 02:16:16.571159 systemd-tmpfiles[1377]: ACLs are not supported, ignoring. Jan 20 02:16:16.572993 systemd-tmpfiles[1377]: ACLs are not supported, ignoring. Jan 20 02:16:16.628233 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 02:16:16.628257 systemd-tmpfiles[1377]: Skipping /boot Jan 20 02:16:16.727870 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 02:16:16.728629 systemd-tmpfiles[1377]: Skipping /boot Jan 20 02:16:16.921300 zram_generator::config[1405]: No configuration found. Jan 20 02:16:18.257222 systemd[1]: Reloading finished in 1702 ms. 
Jan 20 02:16:18.685000 audit: BPF prog-id=41 op=LOAD Jan 20 02:16:18.702142 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 20 02:16:18.709653 kernel: audit: type=1334 audit(1768875378.685:175): prog-id=41 op=LOAD Jan 20 02:16:18.687000 audit: BPF prog-id=38 op=UNLOAD Jan 20 02:16:18.736926 kernel: audit: type=1334 audit(1768875378.687:176): prog-id=38 op=UNLOAD Jan 20 02:16:18.737003 kernel: audit: type=1334 audit(1768875378.692:177): prog-id=42 op=LOAD Jan 20 02:16:18.737047 kernel: audit: type=1334 audit(1768875378.692:178): prog-id=43 op=LOAD Jan 20 02:16:18.737069 kernel: audit: type=1334 audit(1768875378.692:179): prog-id=39 op=UNLOAD Jan 20 02:16:18.737092 kernel: audit: type=1334 audit(1768875378.692:180): prog-id=40 op=UNLOAD Jan 20 02:16:18.737110 kernel: audit: type=1334 audit(1768875378.693:181): prog-id=44 op=LOAD Jan 20 02:16:18.737154 kernel: audit: type=1334 audit(1768875378.707:182): prog-id=31 op=UNLOAD Jan 20 02:16:18.737174 kernel: audit: type=1334 audit(1768875378.708:183): prog-id=45 op=LOAD Jan 20 02:16:18.737195 kernel: audit: type=1334 audit(1768875378.708:184): prog-id=46 op=LOAD Jan 20 02:16:18.692000 audit: BPF prog-id=42 op=LOAD Jan 20 02:16:18.692000 audit: BPF prog-id=43 op=LOAD Jan 20 02:16:18.692000 audit: BPF prog-id=39 op=UNLOAD Jan 20 02:16:18.692000 audit: BPF prog-id=40 op=UNLOAD Jan 20 02:16:18.693000 audit: BPF prog-id=44 op=LOAD Jan 20 02:16:18.707000 audit: BPF prog-id=31 op=UNLOAD Jan 20 02:16:18.708000 audit: BPF prog-id=45 op=LOAD Jan 20 02:16:18.708000 audit: BPF prog-id=46 op=LOAD Jan 20 02:16:18.708000 audit: BPF prog-id=32 op=UNLOAD Jan 20 02:16:18.708000 audit: BPF prog-id=33 op=UNLOAD Jan 20 02:16:18.733000 audit: BPF prog-id=47 op=LOAD Jan 20 02:16:18.734000 audit: BPF prog-id=28 op=UNLOAD Jan 20 02:16:18.737000 audit: BPF prog-id=48 op=LOAD Jan 20 02:16:18.739000 audit: BPF prog-id=49 op=LOAD Jan 20 02:16:18.741000 audit: BPF prog-id=29 op=UNLOAD Jan 20 02:16:18.743000 audit: BPF prog-id=30 op=UNLOAD
Jan 20 02:16:18.759000 audit: BPF prog-id=50 op=LOAD Jan 20 02:16:18.772000 audit: BPF prog-id=34 op=UNLOAD Jan 20 02:16:18.779000 audit: BPF prog-id=51 op=LOAD Jan 20 02:16:18.788000 audit: BPF prog-id=35 op=UNLOAD Jan 20 02:16:18.788000 audit: BPF prog-id=52 op=LOAD Jan 20 02:16:18.788000 audit: BPF prog-id=53 op=LOAD Jan 20 02:16:18.788000 audit: BPF prog-id=36 op=UNLOAD Jan 20 02:16:18.788000 audit: BPF prog-id=37 op=UNLOAD Jan 20 02:16:19.138939 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 02:16:19.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:19.233107 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 02:16:19.300231 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 02:16:19.375698 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 02:16:19.478591 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 02:16:19.517000 audit: BPF prog-id=8 op=UNLOAD Jan 20 02:16:19.517000 audit: BPF prog-id=7 op=UNLOAD Jan 20 02:16:19.532000 audit: BPF prog-id=54 op=LOAD Jan 20 02:16:19.544000 audit: BPF prog-id=55 op=LOAD Jan 20 02:16:19.661329 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 02:16:19.707329 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 02:16:19.880161 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:16:19.880482 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 02:16:19.903151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 02:16:19.919000 audit[1461]: SYSTEM_BOOT pid=1461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 02:16:19.954863 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 02:16:20.008251 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 02:16:20.021208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 02:16:20.021768 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 02:16:20.021903 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 02:16:20.022027 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:16:20.070114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 02:16:20.074998 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 02:16:20.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:20.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:16:20.119461 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 02:16:20.119840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 02:16:20.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:20.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:16:20.148000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 02:16:20.148000 audit[1475]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe4d15dba0 a2=420 a3=0 items=0 ppid=1447 pid=1475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:16:20.148000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 02:16:20.153967 augenrules[1475]: No rules Jan 20 02:16:20.144766 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 02:16:20.145100 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 02:16:20.172108 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 02:16:20.187348 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 02:16:20.305178 systemd-udevd[1459]: Using default interface naming scheme 'v257'. Jan 20 02:16:20.336898 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 02:16:20.427338 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 02:16:20.471393 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:16:20.488004 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 02:16:20.498505 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 02:16:20.508880 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 02:16:20.696647 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 02:16:20.783361 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 02:16:20.836029 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 02:16:20.850684 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 02:16:20.851016 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 02:16:20.851152 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 02:16:20.851314 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 02:16:20.864562 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 02:16:20.888977 augenrules[1486]: /sbin/augenrules: No change Jan 20 02:16:21.101751 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 20 02:16:21.192276 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 02:16:21.194116 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 02:16:21.215272 augenrules[1521]: No rules Jan 20 02:16:21.209000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 02:16:21.209000 audit[1521]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe80de6560 a2=420 a3=0 items=0 ppid=1486 pid=1521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:16:21.209000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 02:16:21.214000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 02:16:21.214000 audit[1521]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe80de89f0 a2=420 a3=0 items=0 ppid=1486 pid=1521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:16:21.214000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 02:16:21.218315 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 02:16:21.219186 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 02:16:21.242268 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 02:16:21.244108 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 02:16:21.269950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 02:16:21.270726 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 20 02:16:21.282666 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 02:16:21.284647 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 02:16:21.369686 systemd[1]: Finished ensure-sysext.service. Jan 20 02:16:21.449876 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 02:16:21.465910 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 02:16:21.467459 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 02:16:21.551412 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 02:16:21.584167 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 02:16:21.597144 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 02:16:23.003390 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 02:16:23.111711 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 02:16:24.005281 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 02:16:24.346942 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 20 02:16:24.415861 kernel: ACPI: button: Power Button [PWRF] Jan 20 02:16:24.707072 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 02:16:24.850098 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 02:16:25.146310 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 20 02:16:25.589610 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 02:16:25.612372 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 02:16:25.652271 systemd-networkd[1538]: lo: Link UP Jan 20 02:16:25.652286 systemd-networkd[1538]: lo: Gained carrier Jan 20 02:16:25.706290 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 02:16:25.742799 systemd[1]: Reached target network.target - Network. Jan 20 02:16:25.802430 systemd-networkd[1538]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 02:16:25.802441 systemd-networkd[1538]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 02:16:25.981202 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 02:16:26.031999 systemd-networkd[1538]: eth0: Link UP Jan 20 02:16:26.043867 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 02:16:26.063601 systemd-networkd[1538]: eth0: Gained carrier Jan 20 02:16:26.063793 systemd-networkd[1538]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 02:16:26.273754 systemd-networkd[1538]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 02:16:26.283946 systemd-timesyncd[1541]: Network configuration changed, trying to establish connection. Jan 20 02:16:26.299920 systemd-timesyncd[1541]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 02:16:26.300106 systemd-timesyncd[1541]: Initial clock synchronization to Tue 2026-01-20 02:16:26.524486 UTC. Jan 20 02:16:27.477479 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jan 20 02:16:27.615855 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 02:16:28.016701 systemd-networkd[1538]: eth0: Gained IPv6LL Jan 20 02:16:28.052813 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 02:16:28.067609 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 02:16:28.562660 ldconfig[1449]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 02:16:28.582721 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 02:16:28.879440 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 02:16:28.907160 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 02:16:29.073270 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 02:16:29.093706 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 02:16:29.135349 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 02:16:29.172147 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 02:16:29.193673 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 02:16:29.211018 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 02:16:29.219701 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 02:16:29.240283 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 02:16:29.260289 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 20 02:16:29.279921 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 20 02:16:29.302105 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 02:16:29.302173 systemd[1]: Reached target paths.target - Path Units. Jan 20 02:16:29.324178 systemd[1]: Reached target timers.target - Timer Units. Jan 20 02:16:29.353821 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 02:16:29.377440 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 02:16:29.402242 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 02:16:29.420158 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 02:16:29.427972 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 02:16:29.464341 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 02:16:29.470384 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 02:16:29.490958 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 02:16:29.503039 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 02:16:29.522689 systemd[1]: Reached target basic.target - Basic System. Jan 20 02:16:29.530174 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 02:16:29.530223 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 02:16:29.539624 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 02:16:29.565049 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 02:16:29.607326 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 02:16:29.640831 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 20 02:16:29.678725 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 02:16:29.717290 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 02:16:29.751770 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 02:16:29.757863 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 02:16:29.810966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:16:29.829980 jq[1590]: false Jan 20 02:16:29.858131 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 02:16:29.877848 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 02:16:29.882378 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 02:16:29.945104 extend-filesystems[1591]: Found /dev/vda6 Jan 20 02:16:29.967393 extend-filesystems[1591]: Found /dev/vda9 Jan 20 02:16:29.977617 extend-filesystems[1591]: Checking size of /dev/vda9 Jan 20 02:16:29.988508 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Refreshing passwd entry cache Jan 20 02:16:29.987495 oslogin_cache_refresh[1592]: Refreshing passwd entry cache Jan 20 02:16:30.015514 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Failure getting users, quitting Jan 20 02:16:30.015514 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 02:16:30.015514 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Refreshing group entry cache Jan 20 02:16:30.015337 oslogin_cache_refresh[1592]: Failure getting users, quitting Jan 20 02:16:30.015379 oslogin_cache_refresh[1592]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 20 02:16:30.015486 oslogin_cache_refresh[1592]: Refreshing group entry cache Jan 20 02:16:30.028776 extend-filesystems[1591]: Resized partition /dev/vda9 Jan 20 02:16:30.045023 extend-filesystems[1606]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 02:16:30.068037 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 20 02:16:30.042950 oslogin_cache_refresh[1592]: Failure getting groups, quitting Jan 20 02:16:30.068298 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Failure getting groups, quitting Jan 20 02:16:30.068298 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 02:16:30.042974 oslogin_cache_refresh[1592]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 02:16:30.128012 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 02:16:30.169334 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 20 02:16:30.173928 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 02:16:30.242525 extend-filesystems[1606]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 02:16:30.242525 extend-filesystems[1606]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 02:16:30.242525 extend-filesystems[1606]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 20 02:16:30.256914 extend-filesystems[1591]: Resized filesystem in /dev/vda9 Jan 20 02:16:30.313451 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 02:16:30.406635 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 02:16:30.424394 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
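The extend-filesystems entries above record resize2fs growing /dev/vda9 online from 456704 to 1784827 blocks, which the kernel confirms as "(4k) blocks". As a minimal sketch (the block counts come straight from the log; the helper name is made up for illustration), those counts translate into sizes like this:

```python
# Convert the ext4 block counts reported by resize2fs into byte/GiB sizes.
# Block size is 4 KiB, per the kernel's "(4k) blocks" note in the log.
BLOCK_SIZE = 4096

def blocks_to_gib(blocks: int) -> float:
    """Return the filesystem size in GiB for a given 4k-block count."""
    return blocks * BLOCK_SIZE / 2**30

before = blocks_to_gib(456704)    # size before the online resize
after = blocks_to_gib(1784827)    # size after resizing to fill /dev/vda9

print(f"before: {before:.2f} GiB, after: {after:.2f} GiB")
```

So the root filesystem grew from roughly 1.74 GiB to roughly 6.81 GiB without unmounting, which is why extend-filesystems only needed an on-line resize rather than a reboot.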
Jan 20 02:16:30.426333 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 02:16:30.650704 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 02:16:31.084337 jq[1628]: true Jan 20 02:16:31.160624 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 02:16:31.175145 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 02:16:31.178983 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 02:16:31.183410 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 02:16:31.186788 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 02:16:31.198743 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 02:16:31.238285 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 02:16:31.265374 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 02:16:31.273119 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 02:16:31.292219 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 02:16:31.354586 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 02:16:31.355076 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 02:16:31.389329 update_engine[1626]: I20260120 02:16:31.389146 1626 main.cc:92] Flatcar Update Engine starting Jan 20 02:16:31.739904 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 02:16:31.773511 jq[1639]: true Jan 20 02:16:31.751050 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 02:16:32.729294 tar[1638]: linux-amd64/LICENSE Jan 20 02:16:32.729294 tar[1638]: linux-amd64/helm Jan 20 02:16:32.817134 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 20 02:16:32.929603 sshd_keygen[1627]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 02:16:32.880471 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 02:16:33.820642 dbus-daemon[1588]: [system] SELinux support is enabled Jan 20 02:16:33.845801 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 02:16:34.953445 systemd-logind[1619]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 02:16:34.992644 update_engine[1626]: I20260120 02:16:34.992486 1626 update_check_scheduler.cc:74] Next update check in 5m24s Jan 20 02:16:35.011521 systemd-logind[1619]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 02:16:35.016398 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 02:16:35.016475 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 02:16:35.145404 systemd-logind[1619]: New seat seat0. Jan 20 02:16:35.229709 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 02:16:35.230867 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 02:16:35.263433 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 02:16:35.408051 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 02:16:35.495021 systemd[1]: Started update-engine.service - Update Engine. Jan 20 02:16:35.539963 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 20 02:16:35.588421 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:34180.service - OpenSSH per-connection server daemon (10.0.0.1:34180). Jan 20 02:16:36.000225 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 02:16:36.123666 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 02:16:36.128434 bash[1684]: Updated "/home/core/.ssh/authorized_keys" Jan 20 02:16:36.134624 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 02:16:36.173289 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 02:16:36.211522 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 02:16:36.214231 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 02:16:37.635333 kernel: kvm_amd: TSC scaling supported Jan 20 02:16:37.640425 kernel: kvm_amd: Nested Virtualization enabled Jan 20 02:16:37.640600 kernel: kvm_amd: Nested Paging enabled Jan 20 02:16:37.649562 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 02:16:37.654205 kernel: kvm_amd: PMU virtualization is disabled Jan 20 02:16:39.535514 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 02:16:39.985881 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 02:16:40.027924 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 02:16:40.049197 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 02:16:41.152710 locksmithd[1691]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 02:16:42.633145 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 34180 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:16:42.695585 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:16:43.497507 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 20 02:16:43.545144 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 02:16:43.971354 systemd-logind[1619]: New session 1 of user core. Jan 20 02:16:44.887304 containerd[1640]: time="2026-01-20T02:16:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 02:16:45.250782 containerd[1640]: time="2026-01-20T02:16:45.115947196Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 02:16:45.654465 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 02:16:45.708399 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 02:16:46.814855 containerd[1640]: time="2026-01-20T02:16:46.814702579Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="307.943µs" Jan 20 02:16:46.818551 containerd[1640]: time="2026-01-20T02:16:46.816183260Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 02:16:46.819040 containerd[1640]: time="2026-01-20T02:16:46.818949645Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 02:16:46.819185 containerd[1640]: time="2026-01-20T02:16:46.819153597Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 02:16:46.820117 containerd[1640]: time="2026-01-20T02:16:46.820086052Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 02:16:46.820554 containerd[1640]: time="2026-01-20T02:16:46.820427675Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 02:16:46.821035 containerd[1640]: 
time="2026-01-20T02:16:46.820898254Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 02:16:46.821035 containerd[1640]: time="2026-01-20T02:16:46.820925666Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 02:16:46.860360 containerd[1640]: time="2026-01-20T02:16:46.859771765Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 02:16:46.861506 containerd[1640]: time="2026-01-20T02:16:46.860981831Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 02:16:47.099430 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:16:47.340165 containerd[1640]: time="2026-01-20T02:16:46.861447711Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 02:16:47.340165 containerd[1640]: time="2026-01-20T02:16:47.221906991Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 02:16:47.340165 containerd[1640]: time="2026-01-20T02:16:47.241905281Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 02:16:47.340165 containerd[1640]: time="2026-01-20T02:16:47.242064798Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 02:16:47.340165 containerd[1640]: time="2026-01-20T02:16:47.242846471Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 02:16:47.340165 containerd[1640]: time="2026-01-20T02:16:47.243742147Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 02:16:47.340165 containerd[1640]: time="2026-01-20T02:16:47.243830945Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 02:16:47.340165 containerd[1640]: time="2026-01-20T02:16:47.243893152Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 02:16:47.340165 containerd[1640]: time="2026-01-20T02:16:47.244092771Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 02:16:47.340165 containerd[1640]: time="2026-01-20T02:16:47.246160445Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 02:16:47.340165 containerd[1640]: time="2026-01-20T02:16:47.250477484Z" level=info msg="metadata content store policy set" policy=shared Jan 20 02:16:47.376927 systemd-logind[1619]: New session 2 of user core. 
Jan 20 02:16:48.425758 containerd[1640]: time="2026-01-20T02:16:47.454884611Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 02:16:48.425758 containerd[1640]: time="2026-01-20T02:16:47.455061764Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 02:16:48.425758 containerd[1640]: time="2026-01-20T02:16:47.455235484Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 02:16:48.425758 containerd[1640]: time="2026-01-20T02:16:47.455255691Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 02:16:48.425758 containerd[1640]: time="2026-01-20T02:16:47.455288878Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 02:16:48.425758 containerd[1640]: time="2026-01-20T02:16:47.455306103Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 02:16:48.425758 containerd[1640]: time="2026-01-20T02:16:47.455423991Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 02:16:48.425758 containerd[1640]: time="2026-01-20T02:16:47.455443435Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 02:16:48.425758 containerd[1640]: time="2026-01-20T02:16:47.455459798Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 02:16:48.425758 containerd[1640]: time="2026-01-20T02:16:47.455474745Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 02:16:48.425758 containerd[1640]: time="2026-01-20T02:16:47.455487704Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 02:16:48.425758 containerd[1640]: time="2026-01-20T02:16:47.455501335Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 02:16:48.425758 containerd[1640]: time="2026-01-20T02:16:47.462829075Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 02:16:48.425758 containerd[1640]: time="2026-01-20T02:16:47.462877771Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.463238161Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.463397387Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.463425454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.463443512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.463459974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.463472924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.463488514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.471444934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases 
type=io.containerd.grpc.v1 Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.475743282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.475775775Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.475791064Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.475880304Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.476077173Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.476131098Z" level=info msg="Start snapshots syncer" Jan 20 02:16:48.468940 containerd[1640]: time="2026-01-20T02:16:47.476211332Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 02:16:48.498916 containerd[1640]: time="2026-01-20T02:16:48.498795357Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 02:16:48.520804 containerd[1640]: time="2026-01-20T02:16:48.520741703Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 02:16:48.526766 containerd[1640]: 
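The cri plugin above dumps its entire effective configuration as one backslash-escaped JSON string in the config= field, which is hard to read in the journal. A small sketch for making such a blob legible, shown on a trimmed fragment of the field above (the full string is too long to repeat, and the fragment keys are taken verbatim from it):

```python
import json

# A trimmed fragment of the escaped JSON containerd logs in config=;
# in the journal every double quote arrives backslash-escaped.
raw = '{\\"enableSelinux\\":true,\\"selinuxCategoryRange\\":1024,\\"maxContainerLogLineSize\\":16384}'

# Undo the journal escaping, then parse and pretty-print the config.
cfg = json.loads(raw.replace('\\"', '"'))
print(json.dumps(cfg, indent=2))
```

This is just a readability aid for the log line; the authoritative way to inspect the running configuration is containerd's own `containerd config dump`.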
time="2026-01-20T02:16:48.526715800Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 02:16:48.527847 containerd[1640]: time="2026-01-20T02:16:48.527810029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 02:16:48.528272 containerd[1640]: time="2026-01-20T02:16:48.528243762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 02:16:48.537372 containerd[1640]: time="2026-01-20T02:16:48.537315563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 02:16:48.538294 containerd[1640]: time="2026-01-20T02:16:48.538259344Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 02:16:48.562165 containerd[1640]: time="2026-01-20T02:16:48.562073531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 02:16:48.562860 containerd[1640]: time="2026-01-20T02:16:48.562713140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 02:16:48.563065 containerd[1640]: time="2026-01-20T02:16:48.563042904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 02:16:48.563248 containerd[1640]: time="2026-01-20T02:16:48.563224210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 02:16:48.563510 containerd[1640]: time="2026-01-20T02:16:48.563488261Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 02:16:48.571634 containerd[1640]: time="2026-01-20T02:16:48.571511566Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 02:16:48.571793 containerd[1640]: 
time="2026-01-20T02:16:48.571765832Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 02:16:48.571888 containerd[1640]: time="2026-01-20T02:16:48.571866350Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 02:16:48.572196 containerd[1640]: time="2026-01-20T02:16:48.571948242Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 02:16:48.572297 containerd[1640]: time="2026-01-20T02:16:48.572274102Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 02:16:48.572383 containerd[1640]: time="2026-01-20T02:16:48.572363781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 02:16:48.572621 containerd[1640]: time="2026-01-20T02:16:48.572495410Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 02:16:48.572879 containerd[1640]: time="2026-01-20T02:16:48.572858835Z" level=info msg="runtime interface created" Jan 20 02:16:48.572954 containerd[1640]: time="2026-01-20T02:16:48.572936853Z" level=info msg="created NRI interface" Jan 20 02:16:48.575280 containerd[1640]: time="2026-01-20T02:16:48.573050668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 02:16:48.576076 containerd[1640]: time="2026-01-20T02:16:48.576051114Z" level=info msg="Connect containerd service" Jan 20 02:16:48.576435 containerd[1640]: time="2026-01-20T02:16:48.576412641Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 02:16:48.627481 containerd[1640]: time="2026-01-20T02:16:48.622694340Z" level=error msg="failed to load cni during init, please check CRI plugin status 
before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 02:16:50.177591 tar[1638]: linux-amd64/README.md Jan 20 02:16:51.557963 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 02:16:53.532698 systemd[1713]: Queued start job for default target default.target. Jan 20 02:16:53.972200 systemd[1713]: Created slice app.slice - User Application Slice. Jan 20 02:16:53.973630 systemd[1713]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 02:16:53.977315 systemd[1713]: Reached target paths.target - Paths. Jan 20 02:16:53.977414 systemd[1713]: Reached target timers.target - Timers. Jan 20 02:16:53.985667 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 02:16:54.059259 systemd[1713]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 02:16:54.554503 containerd[1640]: time="2026-01-20T02:16:54.553341905Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 02:16:54.554503 containerd[1640]: time="2026-01-20T02:16:54.553459594Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 20 02:16:54.554503 containerd[1640]: time="2026-01-20T02:16:54.553653023Z" level=info msg="Start subscribing containerd event" Jan 20 02:16:54.554503 containerd[1640]: time="2026-01-20T02:16:54.553772968Z" level=info msg="Start recovering state" Jan 20 02:16:54.554503 containerd[1640]: time="2026-01-20T02:16:54.554192691Z" level=info msg="Start event monitor" Jan 20 02:16:54.554503 containerd[1640]: time="2026-01-20T02:16:54.554208803Z" level=info msg="Start cni network conf syncer for default" Jan 20 02:16:54.554503 containerd[1640]: time="2026-01-20T02:16:54.554217697Z" level=info msg="Start streaming server" Jan 20 02:16:54.554503 containerd[1640]: time="2026-01-20T02:16:54.554281003Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 02:16:54.554503 containerd[1640]: time="2026-01-20T02:16:54.554293055Z" level=info msg="runtime interface starting up..." Jan 20 02:16:54.554503 containerd[1640]: time="2026-01-20T02:16:54.554316196Z" level=info msg="starting plugins..." Jan 20 02:16:54.554503 containerd[1640]: time="2026-01-20T02:16:54.554362818Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 02:16:54.596085 containerd[1640]: time="2026-01-20T02:16:54.593765990Z" level=info msg="containerd successfully booted in 10.911156s" Jan 20 02:16:54.594355 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 02:16:55.262244 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 02:16:55.491351 systemd[1713]: Reached target sockets.target - Sockets. Jan 20 02:16:55.524582 systemd[1713]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 02:16:55.524873 systemd[1713]: Reached target basic.target - Basic System. Jan 20 02:16:55.525042 systemd[1713]: Reached target default.target - Main User Target. Jan 20 02:16:55.525107 systemd[1713]: Startup finished in 7.101s. 
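The daemon's own summary line above reports "containerd successfully booted in 10.911156s". A sketch for pulling that duration out of a journal line with a regex (the pattern is illustrative, matched against the message text as logged, not an official log format):

```python
import re

# The containerd summary line as it appears in the journal above.
line = ('Jan 20 02:16:54.596085 containerd[1640]: '
        'time="2026-01-20T02:16:54.593765990Z" level=info '
        'msg="containerd successfully booted in 10.911156s"')

# Extract the boot duration in seconds from the msg field.
match = re.search(r'successfully booted in ([\d.]+)s', line)
boot_seconds = float(match.group(1))
print(boot_seconds)
```

Nearly eleven seconds is long for containerd; consistent with the widely spaced timestamps throughout this boot, it suggests a heavily loaded or slow virtual machine rather than a containerd problem.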
Jan 20 02:16:55.526807 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 02:16:56.011500 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 02:16:57.512412 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:58004.service - OpenSSH per-connection server daemon (10.0.0.1:58004). Jan 20 02:17:01.419726 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 58004 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:17:01.453382 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:17:01.579258 systemd-logind[1619]: New session 3 of user core. Jan 20 02:17:01.620232 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 02:17:02.602495 sshd[1753]: Connection closed by 10.0.0.1 port 58004 Jan 20 02:17:02.605486 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Jan 20 02:17:02.658931 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:58004.service: Deactivated successfully. Jan 20 02:17:02.671091 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 02:17:02.676685 systemd-logind[1619]: Session 3 logged out. Waiting for processes to exit. Jan 20 02:17:02.701028 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:58018.service - OpenSSH per-connection server daemon (10.0.0.1:58018). Jan 20 02:17:02.705467 systemd-logind[1619]: Removed session 3. Jan 20 02:17:02.966489 kernel: EDAC MC: Ver: 3.0.0 Jan 20 02:17:04.985883 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 58018 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:17:05.008684 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:17:05.082076 systemd-logind[1619]: New session 4 of user core. Jan 20 02:17:05.105889 systemd[1]: Started session-4.scope - Session 4 of User core. 
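The per-connection OpenSSH units above (e.g. sshd@1-10.0.0.97:22-10.0.0.1:58004.service) encode the instance number, local address:port, and peer address:port in the unit name. A regex sketch for decomposing such a name (IPv4 only; the field layout is inferred from the names in this log, not from a specification):

```python
import re

# A per-connection sshd unit name exactly as it appears in the log above.
unit = "sshd@1-10.0.0.97:22-10.0.0.1:58004.service"

# Split the instance name into instance counter, local endpoint, peer endpoint.
m = re.fullmatch(r"sshd@(\d+)-(.+):(\d+)-(.+):(\d+)\.service", unit)
instance, local_ip, local_port, peer_ip, peer_port = m.groups()
print(f"conn #{instance}: {peer_ip}:{peer_port} -> {local_ip}:{local_port}")
```

Reading the names this way shows all sessions in this log come from the same peer, 10.0.0.1, connecting to the listener on 10.0.0.97:22.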
Jan 20 02:17:05.317390 sshd[1764]: Connection closed by 10.0.0.1 port 58018 Jan 20 02:17:05.315584 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Jan 20 02:17:05.354911 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:58018.service: Deactivated successfully. Jan 20 02:17:05.379005 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 02:17:05.414914 systemd-logind[1619]: Session 4 logged out. Waiting for processes to exit. Jan 20 02:17:05.446740 systemd-logind[1619]: Removed session 4. Jan 20 02:17:06.157291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:17:06.158325 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 02:17:06.158679 systemd[1]: Startup finished in 18.149s (kernel) + 48.473s (initrd) + 1min 11.004s (userspace) = 2min 17.627s. Jan 20 02:17:06.199331 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:17:12.126272 kubelet[1774]: E0120 02:17:12.121913 1774 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:17:12.167952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:17:12.172745 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:17:12.181854 systemd[1]: kubelet.service: Consumed 8.107s CPU time, 258.9M memory peak. Jan 20 02:17:15.430169 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:41764.service - OpenSSH per-connection server daemon (10.0.0.1:41764). 
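The "Startup finished" line above breaks the boot into 18.149s (kernel) + 48.473s (initrd) + 1min 11.004s (userspace) = 2min 17.627s. Summing the printed stages as a sanity check (the one-millisecond gap versus the printed total is expected, assuming each stage is rounded to milliseconds independently before printing):

```python
# Per-stage startup times as printed by systemd for this boot, in seconds.
kernel = 18.149
initrd = 48.473
userspace = 60 + 11.004   # "1min 11.004s"

total = kernel + initrd + userspace
printed_total = 2 * 60 + 17.627  # "2min 17.627s" from the log

print(f"sum of stages: {total:.3f}s, printed total: {printed_total:.3f}s")
# The ~1 ms discrepancy comes from rounding each stage separately.
assert abs(total - printed_total) < 0.005
```

A userspace phase of over a minute is dominated here by slow unit startups (containerd alone took ~11s), which matches the large timestamp gaps visible throughout the journal.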
Jan 20 02:17:15.964134 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 41764 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:17:15.970884 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:17:16.020193 systemd-logind[1619]: New session 5 of user core.
Jan 20 02:17:16.061870 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 20 02:17:16.302857 sshd[1788]: Connection closed by 10.0.0.1 port 41764
Jan 20 02:17:16.305454 sshd-session[1784]: pam_unix(sshd:session): session closed for user core
Jan 20 02:17:16.355301 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:41764.service: Deactivated successfully.
Jan 20 02:17:16.368330 systemd[1]: session-5.scope: Deactivated successfully.
Jan 20 02:17:16.387811 systemd-logind[1619]: Session 5 logged out. Waiting for processes to exit.
Jan 20 02:17:16.403827 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:41778.service - OpenSSH per-connection server daemon (10.0.0.1:41778).
Jan 20 02:17:16.409688 systemd-logind[1619]: Removed session 5.
Jan 20 02:17:17.104720 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 41778 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:17:17.114934 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:17:17.211042 systemd-logind[1619]: New session 6 of user core.
Jan 20 02:17:17.271400 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 20 02:17:17.405038 sshd[1798]: Connection closed by 10.0.0.1 port 41778
Jan 20 02:17:17.400091 sshd-session[1794]: pam_unix(sshd:session): session closed for user core
Jan 20 02:17:17.456240 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:41778.service: Deactivated successfully.
Jan 20 02:17:17.480056 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 02:17:17.501879 systemd-logind[1619]: Session 6 logged out. Waiting for processes to exit.
Jan 20 02:17:17.553347 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:41794.service - OpenSSH per-connection server daemon (10.0.0.1:41794).
Jan 20 02:17:17.570953 systemd-logind[1619]: Removed session 6.
Jan 20 02:17:18.463491 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 41794 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:17:18.469960 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:17:18.550875 systemd-logind[1619]: New session 7 of user core.
Jan 20 02:17:18.621377 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 20 02:17:18.886849 sshd[1808]: Connection closed by 10.0.0.1 port 41794
Jan 20 02:17:18.892891 sshd-session[1804]: pam_unix(sshd:session): session closed for user core
Jan 20 02:17:18.954866 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:41820.service - OpenSSH per-connection server daemon (10.0.0.1:41820).
Jan 20 02:17:18.961285 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:41794.service: Deactivated successfully.
Jan 20 02:17:18.983771 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 02:17:19.055941 systemd-logind[1619]: Session 7 logged out. Waiting for processes to exit.
Jan 20 02:17:19.078046 systemd-logind[1619]: Removed session 7.
Jan 20 02:17:19.656203 sshd[1811]: Accepted publickey for core from 10.0.0.1 port 41820 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:17:19.657660 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:17:19.691612 systemd-logind[1619]: New session 8 of user core.
Jan 20 02:17:19.720433 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 20 02:17:19.921310 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 20 02:17:19.924916 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 02:17:20.041941 sudo[1820]: pam_unix(sudo:session): session closed for user root
Jan 20 02:17:20.069719 sshd[1819]: Connection closed by 10.0.0.1 port 41820
Jan 20 02:17:20.069093 sshd-session[1811]: pam_unix(sshd:session): session closed for user core
Jan 20 02:17:20.139605 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:41820.service: Deactivated successfully.
Jan 20 02:17:20.162168 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 02:17:20.214779 systemd-logind[1619]: Session 8 logged out. Waiting for processes to exit.
Jan 20 02:17:20.243936 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:41832.service - OpenSSH per-connection server daemon (10.0.0.1:41832).
Jan 20 02:17:20.362805 systemd-logind[1619]: Removed session 8.
Jan 20 02:17:20.546580 update_engine[1626]: I20260120 02:17:20.537488 1626 update_attempter.cc:509] Updating boot flags...
Jan 20 02:17:20.813977 sshd[1827]: Accepted publickey for core from 10.0.0.1 port 41832 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:17:20.825042 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:17:20.947905 systemd-logind[1619]: New session 9 of user core.
Jan 20 02:17:20.966589 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 20 02:17:21.453136 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 20 02:17:21.465381 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 02:17:21.520602 sudo[1847]: pam_unix(sudo:session): session closed for user root
Jan 20 02:17:21.703822 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 20 02:17:21.704739 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 02:17:21.916907 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 02:17:23.279680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 20 02:17:23.321784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:17:23.874000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 20 02:17:23.888939 kernel: kauditd_printk_skb: 35 callbacks suppressed
Jan 20 02:17:23.889143 kernel: audit: type=1305 audit(1768875443.874:214): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 20 02:17:23.874000 audit[1876]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff1f2c3470 a2=420 a3=0 items=0 ppid=1854 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:23.874000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 20 02:17:23.995188 augenrules[1876]: No rules
Jan 20 02:17:24.001044 kernel: audit: type=1300 audit(1768875443.874:214): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff1f2c3470 a2=420 a3=0 items=0 ppid=1854 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:24.001179 kernel: audit: type=1327 audit(1768875443.874:214): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 20 02:17:24.011005 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 02:17:24.014106 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 02:17:24.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:24.040780 sudo[1846]: pam_unix(sudo:session): session closed for user root
Jan 20 02:17:24.089700 kernel: audit: type=1130 audit(1768875444.016:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:24.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:24.151858 sshd[1845]: Connection closed by 10.0.0.1 port 41832
Jan 20 02:17:24.037000 audit[1846]: USER_END pid=1846 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:24.188885 sshd-session[1827]: pam_unix(sshd:session): session closed for user core
Jan 20 02:17:24.222803 kernel: audit: type=1131 audit(1768875444.016:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:24.222916 kernel: audit: type=1106 audit(1768875444.037:217): pid=1846 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:24.222988 kernel: audit: type=1104 audit(1768875444.037:218): pid=1846 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:24.037000 audit[1846]: CRED_DISP pid=1846 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:24.288494 kernel: audit: type=1106 audit(1768875444.220:219): pid=1827 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:17:24.220000 audit[1827]: USER_END pid=1827 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:17:24.385424 kernel: audit: type=1104 audit(1768875444.220:220): pid=1827 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:17:24.220000 audit[1827]: CRED_DISP pid=1827 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:17:24.991093 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:41832.service: Deactivated successfully.
Jan 20 02:17:25.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.97:22-10.0.0.1:41832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:25.056786 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 02:17:25.077652 systemd-logind[1619]: Session 9 logged out. Waiting for processes to exit.
Jan 20 02:17:25.106633 kernel: audit: type=1131 audit(1768875445.009:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.97:22-10.0.0.1:41832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:25.124977 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:41850.service - OpenSSH per-connection server daemon (10.0.0.1:41850).
Jan 20 02:17:25.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.97:22-10.0.0.1:41850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:25.153015 systemd-logind[1619]: Removed session 9.
Jan 20 02:17:26.270781 sshd[1885]: Accepted publickey for core from 10.0.0.1 port 41850 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:17:26.268000 audit[1885]: USER_ACCT pid=1885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:17:26.272000 audit[1885]: CRED_ACQ pid=1885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:17:26.272000 audit[1885]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4c45b7e0 a2=3 a3=0 items=0 ppid=1 pid=1885 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:26.272000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:17:26.279038 sshd-session[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:17:26.327434 systemd-logind[1619]: New session 10 of user core.
Jan 20 02:17:26.450224 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 20 02:17:26.473000 audit[1885]: USER_START pid=1885 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:17:26.485000 audit[1889]: CRED_ACQ pid=1889 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:17:26.606920 sudo[1890]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 20 02:17:26.602000 audit[1890]: USER_ACCT pid=1890 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:26.604000 audit[1890]: CRED_REFR pid=1890 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:26.620452 sudo[1890]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 02:17:26.622000 audit[1890]: USER_START pid=1890 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:27.949085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:17:27.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:27.993871 (kubelet)[1905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:17:28.682273 kubelet[1905]: E0120 02:17:28.674792 1905 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:17:28.743361 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:17:28.743754 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:17:28.759513 systemd[1]: kubelet.service: Consumed 1.280s CPU time, 110.9M memory peak.
Jan 20 02:17:28.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:17:31.190148 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 20 02:17:31.268153 (dockerd)[1927]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 20 02:17:39.095044 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 20 02:17:39.149858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:17:42.616419 dockerd[1927]: time="2026-01-20T02:17:42.602414889Z" level=info msg="Starting up"
Jan 20 02:17:42.739905 dockerd[1927]: time="2026-01-20T02:17:42.737313043Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 20 02:17:43.802315 dockerd[1927]: time="2026-01-20T02:17:43.794989949Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 20 02:17:44.277909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:17:44.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:44.327675 kernel: kauditd_printk_skb: 13 callbacks suppressed
Jan 20 02:17:44.327829 kernel: audit: type=1130 audit(1768875464.277:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:17:44.355843 (kubelet)[1960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:17:44.917579 dockerd[1927]: time="2026-01-20T02:17:44.917127247Z" level=info msg="Loading containers: start."
Jan 20 02:17:45.126434 kernel: Initializing XFRM netlink socket
Jan 20 02:17:45.155830 kubelet[1960]: E0120 02:17:45.154949 1960 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:17:45.186213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:17:45.193863 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:17:45.351057 kernel: audit: type=1131 audit(1768875465.287:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:17:45.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:17:45.287908 systemd[1]: kubelet.service: Consumed 1.109s CPU time, 110.5M memory peak.
Jan 20 02:17:49.869000 audit[1998]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:49.869000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff9d113dd0 a2=0 a3=0 items=0 ppid=1927 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.022509 kernel: audit: type=1325 audit(1768875469.869:235): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:50.022721 kernel: audit: type=1300 audit(1768875469.869:235): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff9d113dd0 a2=0 a3=0 items=0 ppid=1927 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.022765 kernel: audit: type=1327 audit(1768875469.869:235): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 20 02:17:49.869000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 20 02:17:50.072939 kernel: audit: type=1325 audit(1768875469.955:236): table=filter:3 family=2 entries=2 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:49.955000 audit[2000]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:49.955000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffed909300 a2=0 a3=0 items=0 ppid=1927 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:49.955000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jan 20 02:17:50.274111 kernel: audit: type=1300 audit(1768875469.955:236): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffed909300 a2=0 a3=0 items=0 ppid=1927 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.274348 kernel: audit: type=1327 audit(1768875469.955:236): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jan 20 02:17:49.993000 audit[2002]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:49.993000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef28d0ab0 a2=0 a3=0 items=0 ppid=1927 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.324248 kernel: audit: type=1325 audit(1768875469.993:237): table=filter:4 family=2 entries=1 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:50.324361 kernel: audit: type=1300 audit(1768875469.993:237): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef28d0ab0 a2=0 a3=0 items=0 ppid=1927 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.357367 kernel: audit: type=1327 audit(1768875469.993:237): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Jan 20 02:17:49.993000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Jan 20 02:17:50.048000 audit[2004]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:50.418333 kernel: audit: type=1325 audit(1768875470.048:238): table=filter:5 family=2 entries=1 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:50.048000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe151343a0 a2=0 a3=0 items=0 ppid=1927 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.048000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Jan 20 02:17:50.094000 audit[2006]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:50.094000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeae638310 a2=0 a3=0 items=0 ppid=1927 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.094000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Jan 20 02:17:50.175000 audit[2008]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:50.175000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc23ad2d10 a2=0 a3=0 items=0 ppid=1927 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 20 02:17:50.233000 audit[2010]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:50.233000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdebdeaff0 a2=0 a3=0 items=0 ppid=1927 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.233000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jan 20 02:17:50.278000 audit[2012]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:50.278000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd39ca8e90 a2=0 a3=0 items=0 ppid=1927 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.278000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Jan 20 02:17:50.743000 audit[2016]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:50.743000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffe32f52b40 a2=0 a3=0 items=0 ppid=1927 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.743000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Jan 20 02:17:50.775000 audit[2018]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:50.775000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffddd445f20 a2=0 a3=0 items=0 ppid=1927 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.775000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Jan 20 02:17:50.791000 audit[2020]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:50.791000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe6e5e15a0 a2=0 a3=0 items=0 ppid=1927 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.791000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Jan 20 02:17:50.839000 audit[2022]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:50.839000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffec0f8c130 a2=0 a3=0 items=0 ppid=1927 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.839000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 20 02:17:50.894000 audit[2024]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 20 02:17:50.894000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff28119d60 a2=0 a3=0 items=0 ppid=1927 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:50.894000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Jan 20 02:17:51.447000 audit[2054]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:17:51.447000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff08682420 a2=0 a3=0 items=0 ppid=1927 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:51.447000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 20 02:17:51.479000 audit[2056]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:17:51.479000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff115f5b70 a2=0 a3=0 items=0 ppid=1927 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:51.479000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jan 20 02:17:51.513000 audit[2058]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:17:51.513000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc30e77810 a2=0 a3=0 items=0 ppid=1927 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:51.513000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Jan 20 02:17:51.525000 audit[2060]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:17:51.525000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdcfa030f0 a2=0 a3=0 items=0 ppid=1927 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:51.525000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Jan 20 02:17:51.546000 audit[2062]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:17:51.546000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff4ecb0430 a2=0 a3=0 items=0 ppid=1927 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:51.546000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Jan 20 02:17:51.562000 audit[2064]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:17:51.562000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcdf71c030 a2=0 a3=0 items=0 ppid=1927 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:51.562000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 20 02:17:51.586000 audit[2066]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:17:51.586000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc7c73bde0 a2=0 a3=0 items=0 ppid=1927 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:51.586000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jan 20 02:17:51.638000 audit[2068]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:17:51.638000 audit[2068]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc769aee70 a2=0 a3=0 items=0 ppid=1927 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:51.638000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Jan 20 02:17:51.700000 audit[2070]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:17:51.700000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffea14159b0 a2=0 a3=0 items=0 ppid=1927 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:17:51.700000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238
Jan 20 02:17:51.727000 audit[2072]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 20 02:17:51.727000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd5988ffb0 a2=0 a3=0 items=0 ppid=1927 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:51.727000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 02:17:51.794000 audit[2074]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:17:51.794000 audit[2074]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcebc68310 a2=0 a3=0 items=0 ppid=1927 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:51.794000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 02:17:51.857000 audit[2076]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:17:51.857000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff37c33950 a2=0 a3=0 items=0 ppid=1927 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:51.857000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 02:17:51.887000 audit[2078]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:17:51.887000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff67a91e90 a2=0 a3=0 items=0 
ppid=1927 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:51.887000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 02:17:52.006000 audit[2083]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:17:52.006000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff609053b0 a2=0 a3=0 items=0 ppid=1927 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:52.006000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 02:17:52.054000 audit[2085]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2085 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:17:52.054000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe623f1500 a2=0 a3=0 items=0 ppid=1927 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:52.054000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 02:17:52.088000 audit[2087]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:17:52.088000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe12f50d90 a2=0 a3=0 items=0 ppid=1927 
pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:52.088000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 02:17:52.123000 audit[2089]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:17:52.123000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffebc916200 a2=0 a3=0 items=0 ppid=1927 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:52.123000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 02:17:52.151000 audit[2091]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:17:52.151000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffed2b2e090 a2=0 a3=0 items=0 ppid=1927 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:52.151000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 02:17:52.185000 audit[2093]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:17:52.185000 audit[2093]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffec6666d40 a2=0 a3=0 items=0 ppid=1927 pid=2093 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:52.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 02:17:52.494000 audit[2098]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:17:52.494000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd9c123b10 a2=0 a3=0 items=0 ppid=1927 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:52.494000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 20 02:17:52.494000 audit[2100]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:17:52.494000 audit[2100]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd5c9a3a70 a2=0 a3=0 items=0 ppid=1927 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:52.494000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 20 02:17:52.575000 audit[2108]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:17:52.575000 audit[2108]: SYSCALL arch=c000003e syscall=46 
success=yes exit=300 a0=3 a1=7ffe84cf9610 a2=0 a3=0 items=0 ppid=1927 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:52.575000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 20 02:17:52.739000 audit[2114]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:17:52.739000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe217334f0 a2=0 a3=0 items=0 ppid=1927 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:52.739000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 20 02:17:52.769000 audit[2116]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:17:52.769000 audit[2116]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffda5463f20 a2=0 a3=0 items=0 ppid=1927 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:52.769000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 20 02:17:52.801000 audit[2118]: 
NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:17:52.801000 audit[2118]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe4089b880 a2=0 a3=0 items=0 ppid=1927 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:52.801000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 20 02:17:52.817000 audit[2120]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2120 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:17:52.817000 audit[2120]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc782f2f60 a2=0 a3=0 items=0 ppid=1927 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:52.817000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 02:17:52.844000 audit[2122]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:17:52.844000 audit[2122]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffceb3fd5f0 a2=0 a3=0 items=0 ppid=1927 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:17:52.844000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 20 02:17:52.860464 systemd-networkd[1538]: docker0: Link UP Jan 20 02:17:52.903305 dockerd[1927]: time="2026-01-20T02:17:52.899197485Z" level=info msg="Loading containers: done." Jan 20 02:17:53.104782 dockerd[1927]: time="2026-01-20T02:17:53.101110932Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 02:17:53.104782 dockerd[1927]: time="2026-01-20T02:17:53.101326729Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 02:17:53.104782 dockerd[1927]: time="2026-01-20T02:17:53.101710626Z" level=info msg="Initializing buildkit" Jan 20 02:17:53.694596 dockerd[1927]: time="2026-01-20T02:17:53.693325621Z" level=info msg="Completed buildkit initialization" Jan 20 02:17:53.748965 dockerd[1927]: time="2026-01-20T02:17:53.748862640Z" level=info msg="Daemon has completed initialization" Jan 20 02:17:53.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:17:53.754115 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 02:17:53.762916 dockerd[1927]: time="2026-01-20T02:17:53.761817021Z" level=info msg="API listen on /run/docker.sock" Jan 20 02:17:56.023923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 02:17:56.061316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:17:59.951222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 02:17:59.973443 kernel: kauditd_printk_skb: 111 callbacks suppressed Jan 20 02:17:59.975928 kernel: audit: type=1130 audit(1768875479.962:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:17:59.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:18:00.100064 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:18:01.566314 kubelet[2169]: E0120 02:18:01.563250 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:18:01.602021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:18:01.602458 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:18:01.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 02:18:01.611633 systemd[1]: kubelet.service: Consumed 1.918s CPU time, 110.6M memory peak. Jan 20 02:18:01.637755 kernel: audit: type=1131 audit(1768875481.609:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 02:18:04.158343 containerd[1640]: time="2026-01-20T02:18:04.155427372Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 20 02:18:06.771756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1203038580.mount: Deactivated successfully. Jan 20 02:18:11.876691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 02:18:11.918246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:18:15.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:18:15.022808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:18:15.089013 kernel: audit: type=1130 audit(1768875495.021:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:18:15.499221 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:18:17.203276 kubelet[2246]: E0120 02:18:17.202489 2246 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:18:17.229380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:18:17.230664 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:18:17.234515 systemd[1]: kubelet.service: Consumed 1.307s CPU time, 109.3M memory peak. 
Jan 20 02:18:17.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 02:18:17.252653 kernel: audit: type=1131 audit(1768875497.233:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 02:18:27.317025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 20 02:18:27.373194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:18:31.480141 containerd[1640]: time="2026-01-20T02:18:31.474864461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:18:31.709231 containerd[1640]: time="2026-01-20T02:18:31.529472364Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27057169" Jan 20 02:18:31.752011 containerd[1640]: time="2026-01-20T02:18:31.743374520Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:18:31.846316 containerd[1640]: time="2026-01-20T02:18:31.839207517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:18:31.894936 containerd[1640]: time="2026-01-20T02:18:31.886073375Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 27.730485281s" Jan 20 02:18:31.894936 containerd[1640]: time="2026-01-20T02:18:31.886315363Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 20 02:18:32.495739 containerd[1640]: time="2026-01-20T02:18:32.495637791Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 20 02:18:34.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:18:34.228297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:18:34.304794 kernel: audit: type=1130 audit(1768875514.225:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:18:34.364870 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:18:35.870264 kubelet[2264]: E0120 02:18:35.849222 2264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:18:35.912784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:18:35.921899 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 20 02:18:35.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 02:18:36.024393 systemd[1]: kubelet.service: Consumed 1.462s CPU time, 110.1M memory peak. Jan 20 02:18:36.059646 kernel: audit: type=1131 audit(1768875515.983:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 02:18:46.076257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 20 02:18:46.422575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:18:46.736375 containerd[1640]: time="2026-01-20T02:18:46.731388428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:18:46.785612 containerd[1640]: time="2026-01-20T02:18:46.783088161Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 20 02:18:46.813102 containerd[1640]: time="2026-01-20T02:18:46.812268963Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:18:46.935829 containerd[1640]: time="2026-01-20T02:18:46.935393578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:18:46.963347 containerd[1640]: time="2026-01-20T02:18:46.959938749Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id 
\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 14.462555358s" Jan 20 02:18:46.963347 containerd[1640]: time="2026-01-20T02:18:46.960002372Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 20 02:18:46.991258 containerd[1640]: time="2026-01-20T02:18:46.991080349Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 20 02:18:50.724944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:18:50.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:18:50.796757 kernel: audit: type=1130 audit(1768875530.723:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:18:50.866262 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 02:18:52.624723 kubelet[2285]: E0120 02:18:52.620156 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 02:18:52.672095 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 02:18:52.672650 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 02:18:52.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 02:18:52.728084 systemd[1]: kubelet.service: Consumed 1.581s CPU time, 108.9M memory peak. Jan 20 02:18:52.783760 kernel: audit: type=1131 audit(1768875532.678:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 02:18:58.569244 containerd[1640]: time="2026-01-20T02:18:58.561170464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:18:58.590129 containerd[1640]: time="2026-01-20T02:18:58.573087154Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15719155" Jan 20 02:18:58.590129 containerd[1640]: time="2026-01-20T02:18:58.583402380Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:18:58.594368 containerd[1640]: time="2026-01-20T02:18:58.594271569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:18:58.600439 containerd[1640]: time="2026-01-20T02:18:58.600302452Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 11.608694988s" Jan 20 02:18:58.600439 containerd[1640]: time="2026-01-20T02:18:58.600377695Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 20 02:18:58.897038 containerd[1640]: time="2026-01-20T02:18:58.875254807Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 20 02:19:02.948571 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
Jan 20 02:19:03.314040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:19:06.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:19:06.548079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:19:06.618699 kernel: audit: type=1130 audit(1768875546.547:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:19:06.697366 (kubelet)[2310]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:19:08.218850 kubelet[2310]: E0120 02:19:08.207310 2310 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:19:08.272328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:19:08.280795 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:19:08.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:19:08.345344 systemd[1]: kubelet.service: Consumed 948ms CPU time, 110.1M memory peak.
Jan 20 02:19:08.381173 kernel: audit: type=1131 audit(1768875548.294:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:19:09.959393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount773334656.mount: Deactivated successfully.
Jan 20 02:19:12.864626 containerd[1640]: time="2026-01-20T02:19:12.863473367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:19:12.885233 containerd[1640]: time="2026-01-20T02:19:12.874667779Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25961571"
Jan 20 02:19:12.889181 containerd[1640]: time="2026-01-20T02:19:12.889092503Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:19:12.899633 containerd[1640]: time="2026-01-20T02:19:12.899235862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:19:12.905098 containerd[1640]: time="2026-01-20T02:19:12.901735631Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 14.014572381s"
Jan 20 02:19:12.905098 containerd[1640]: time="2026-01-20T02:19:12.901774584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\""
Jan 20 02:19:12.905480 containerd[1640]: time="2026-01-20T02:19:12.905445217Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Jan 20 02:19:14.859092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount79879675.mount: Deactivated successfully.
Jan 20 02:19:18.286084 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 20 02:19:18.302962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:19:19.176826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:19:19.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:19:19.233649 kernel: audit: type=1130 audit(1768875559.183:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:19:19.261201 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:19:19.854449 kubelet[2385]: E0120 02:19:19.851892 2385 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:19:19.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:19:19.865850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:19:19.866755 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:19:19.867395 systemd[1]: kubelet.service: Consumed 439ms CPU time, 109M memory peak.
Jan 20 02:19:19.916387 kernel: audit: type=1131 audit(1768875559.864:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:19:22.906329 containerd[1640]: time="2026-01-20T02:19:22.903161182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:19:22.915257 containerd[1640]: time="2026-01-20T02:19:22.914772315Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22380234"
Jan 20 02:19:22.922852 containerd[1640]: time="2026-01-20T02:19:22.921310895Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:19:22.951123 containerd[1640]: time="2026-01-20T02:19:22.949636767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:19:22.955259 containerd[1640]: time="2026-01-20T02:19:22.954406121Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 10.048733075s"
Jan 20 02:19:22.955259 containerd[1640]: time="2026-01-20T02:19:22.954501200Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Jan 20 02:19:22.977450 containerd[1640]: time="2026-01-20T02:19:22.969361731Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Jan 20 02:19:24.393801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2233916804.mount: Deactivated successfully.
Jan 20 02:19:24.471191 containerd[1640]: time="2026-01-20T02:19:24.470906595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:19:24.476359 containerd[1640]: time="2026-01-20T02:19:24.476212967Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0"
Jan 20 02:19:24.484985 containerd[1640]: time="2026-01-20T02:19:24.484671979Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:19:24.504024 containerd[1640]: time="2026-01-20T02:19:24.503904015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:19:24.505476 containerd[1640]: time="2026-01-20T02:19:24.505373952Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.535928605s"
Jan 20 02:19:24.505476 containerd[1640]: time="2026-01-20T02:19:24.505447451Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Jan 20 02:19:24.509429 containerd[1640]: time="2026-01-20T02:19:24.509315711Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Jan 20 02:19:28.170922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1682386354.mount: Deactivated successfully.
Jan 20 02:19:30.331359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jan 20 02:19:31.203961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:19:38.906491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:19:38.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:19:38.969652 kernel: audit: type=1130 audit(1768875578.905:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:19:39.009839 (kubelet)[2418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:19:40.421291 kubelet[2418]: E0120 02:19:40.421106 2418 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:19:40.455873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:19:40.464597 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:19:40.475340 systemd[1]: kubelet.service: Consumed 1.587s CPU time, 110.4M memory peak.
Jan 20 02:19:40.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:19:40.543804 kernel: audit: type=1131 audit(1768875580.473:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:19:50.563874 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jan 20 02:19:50.592096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:19:56.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:19:56.267083 kernel: audit: type=1130 audit(1768875596.204:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:19:56.204940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:19:56.418414 (kubelet)[2475]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:19:58.492919 kubelet[2475]: E0120 02:19:58.466032 2475 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:19:58.490724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:19:58.525914 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:19:58.545796 systemd[1]: kubelet.service: Consumed 2.257s CPU time, 110.2M memory peak.
Jan 20 02:19:58.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:19:58.603882 kernel: audit: type=1131 audit(1768875598.545:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:20:08.595202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jan 20 02:20:08.664039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:20:13.987701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:20:13.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:20:14.067733 kernel: audit: type=1130 audit(1768875613.991:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:20:14.257996 (kubelet)[2491]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:20:15.760782 kubelet[2491]: E0120 02:20:15.757013 2491 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:20:15.835403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:20:15.848078 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:20:15.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:20:15.905500 systemd[1]: kubelet.service: Consumed 1.554s CPU time, 110M memory peak.
Jan 20 02:20:15.972048 kernel: audit: type=1131 audit(1768875615.892:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:20:18.612142 containerd[1640]: time="2026-01-20T02:20:18.606772365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:20:18.612142 containerd[1640]: time="2026-01-20T02:20:18.608915379Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74156876"
Jan 20 02:20:18.612142 containerd[1640]: time="2026-01-20T02:20:18.618656913Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:20:18.655351 containerd[1640]: time="2026-01-20T02:20:18.650158408Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 54.14076455s"
Jan 20 02:20:18.655351 containerd[1640]: time="2026-01-20T02:20:18.650211869Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Jan 20 02:20:18.655351 containerd[1640]: time="2026-01-20T02:20:18.651345976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 02:20:26.089653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Jan 20 02:20:26.131140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:20:28.630384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:20:28.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:20:28.719733 kernel: audit: type=1130 audit(1768875628.629:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:20:28.791989 (kubelet)[2533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:20:29.629663 kubelet[2533]: E0120 02:20:29.629272 2533 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:20:29.674410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:20:29.674982 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:20:29.676453 systemd[1]: kubelet.service: Consumed 779ms CPU time, 109.5M memory peak.
Jan 20 02:20:29.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:20:29.753846 kernel: audit: type=1131 audit(1768875629.672:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:20:39.813108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Jan 20 02:20:39.857680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:20:41.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:20:41.654353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:20:41.707465 kernel: audit: type=1130 audit(1768875641.653:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:20:41.717140 (kubelet)[2549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 02:20:42.410562 kubelet[2549]: E0120 02:20:42.409074 2549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 02:20:42.423319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 02:20:42.423818 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 02:20:42.424846 systemd[1]: kubelet.service: Consumed 719ms CPU time, 110.5M memory peak.
Jan 20 02:20:42.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:20:42.456408 kernel: audit: type=1131 audit(1768875642.422:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:20:47.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:20:47.284247 kernel: audit: type=1130 audit(1768875647.235:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:20:47.232020 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:20:47.240233 systemd[1]: kubelet.service: Consumed 719ms CPU time, 110.5M memory peak.
Jan 20 02:20:47.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:20:47.288976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:20:47.333772 kernel: audit: type=1131 audit(1768875647.235:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:20:47.583163 systemd[1]: Reload requested from client PID 2564 ('systemctl') (unit session-10.scope)...
Jan 20 02:20:47.588753 systemd[1]: Reloading...
Jan 20 02:20:48.386470 zram_generator::config[2610]: No configuration found.
Jan 20 02:20:50.804788 systemd[1]: Reloading finished in 3215 ms.
Jan 20 02:20:50.999000 audit: BPF prog-id=61 op=LOAD
Jan 20 02:20:50.999000 audit: BPF prog-id=41 op=UNLOAD
Jan 20 02:20:51.048702 kernel: audit: type=1334 audit(1768875650.999:300): prog-id=61 op=LOAD
Jan 20 02:20:51.048839 kernel: audit: type=1334 audit(1768875650.999:301): prog-id=41 op=UNLOAD
Jan 20 02:20:50.999000 audit: BPF prog-id=62 op=LOAD
Jan 20 02:20:51.082845 kernel: audit: type=1334 audit(1768875650.999:302): prog-id=62 op=LOAD
Jan 20 02:20:51.009000 audit: BPF prog-id=63 op=LOAD
Jan 20 02:20:51.098776 kernel: audit: type=1334 audit(1768875651.009:303): prog-id=63 op=LOAD
Jan 20 02:20:51.009000 audit: BPF prog-id=42 op=UNLOAD
Jan 20 02:20:51.110125 kernel: audit: type=1334 audit(1768875651.009:304): prog-id=42 op=UNLOAD
Jan 20 02:20:51.009000 audit: BPF prog-id=43 op=UNLOAD
Jan 20 02:20:51.120913 kernel: audit: type=1334 audit(1768875651.009:305): prog-id=43 op=UNLOAD
Jan 20 02:20:51.031000 audit: BPF prog-id=64 op=LOAD
Jan 20 02:20:51.160513 kernel: audit: type=1334 audit(1768875651.031:306): prog-id=64 op=LOAD
Jan 20 02:20:51.160706 kernel: audit: type=1334 audit(1768875651.031:307): prog-id=56 op=UNLOAD
Jan 20 02:20:51.031000 audit: BPF prog-id=56 op=UNLOAD
Jan 20 02:20:51.042000 audit: BPF prog-id=65 op=LOAD
Jan 20 02:20:51.042000 audit: BPF prog-id=51 op=UNLOAD
Jan 20 02:20:51.042000 audit: BPF prog-id=66 op=LOAD
Jan 20 02:20:51.042000 audit: BPF prog-id=67 op=LOAD
Jan 20 02:20:51.042000 audit: BPF prog-id=52 op=UNLOAD
Jan 20 02:20:51.042000 audit: BPF prog-id=53 op=UNLOAD
Jan 20 02:20:51.042000 audit: BPF prog-id=68 op=LOAD
Jan 20 02:20:51.042000 audit: BPF prog-id=69 op=LOAD
Jan 20 02:20:51.042000 audit: BPF prog-id=54 op=UNLOAD
Jan 20 02:20:51.042000 audit: BPF prog-id=55 op=UNLOAD
Jan 20 02:20:51.083000 audit: BPF prog-id=70 op=LOAD
Jan 20 02:20:51.083000 audit: BPF prog-id=58 op=UNLOAD
Jan 20 02:20:51.083000 audit: BPF prog-id=71 op=LOAD
Jan 20 02:20:51.083000 audit: BPF prog-id=72 op=LOAD
Jan 20 02:20:51.083000 audit: BPF prog-id=59 op=UNLOAD
Jan 20 02:20:51.083000 audit: BPF prog-id=60 op=UNLOAD
Jan 20 02:20:51.129000 audit: BPF prog-id=73 op=LOAD
Jan 20 02:20:51.129000 audit: BPF prog-id=44 op=UNLOAD
Jan 20 02:20:51.129000 audit: BPF prog-id=74 op=LOAD
Jan 20 02:20:51.129000 audit: BPF prog-id=75 op=LOAD
Jan 20 02:20:51.129000 audit: BPF prog-id=45 op=UNLOAD
Jan 20 02:20:51.129000 audit: BPF prog-id=46 op=UNLOAD
Jan 20 02:20:51.158000 audit: BPF prog-id=76 op=LOAD
Jan 20 02:20:51.158000 audit: BPF prog-id=57 op=UNLOAD
Jan 20 02:20:51.173000 audit: BPF prog-id=77 op=LOAD
Jan 20 02:20:51.173000 audit: BPF prog-id=47 op=UNLOAD
Jan 20 02:20:51.173000 audit: BPF prog-id=78 op=LOAD
Jan 20 02:20:51.173000 audit: BPF prog-id=79 op=LOAD
Jan 20 02:20:51.173000 audit: BPF prog-id=48 op=UNLOAD
Jan 20 02:20:51.173000 audit: BPF prog-id=49 op=UNLOAD
Jan 20 02:20:51.200000 audit: BPF prog-id=80 op=LOAD
Jan 20 02:20:51.200000 audit: BPF prog-id=50 op=UNLOAD
Jan 20 02:20:51.457931 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 20 02:20:51.458088 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 20 02:20:51.469353 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:20:51.469500 systemd[1]: kubelet.service: Consumed 391ms CPU time, 98.6M memory peak.
Jan 20 02:20:51.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 20 02:20:51.513234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:20:54.887147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:20:54.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:20:54.917834 kernel: kauditd_printk_skb: 33 callbacks suppressed
Jan 20 02:20:54.918329 kernel: audit: type=1130 audit(1768875654.884:341): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:20:54.995682 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 02:20:55.504744 kubelet[2659]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 02:20:55.504744 kubelet[2659]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 02:20:55.504744 kubelet[2659]: I0120 02:20:55.504296 2659 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 02:20:56.008926 kubelet[2659]: I0120 02:20:56.003386 2659 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 20 02:20:56.008926 kubelet[2659]: I0120 02:20:56.003451 2659 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 02:20:56.008926 kubelet[2659]: I0120 02:20:56.003583 2659 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 20 02:20:56.008926 kubelet[2659]: I0120 02:20:56.003602 2659 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 02:20:56.008926 kubelet[2659]: I0120 02:20:56.003928 2659 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 20 02:20:56.160342 kubelet[2659]: E0120 02:20:56.159114 2659 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 20 02:20:56.171314 kubelet[2659]: I0120 02:20:56.168755 2659 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 02:20:56.199246 kubelet[2659]: I0120 02:20:56.198419 2659 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 02:20:56.256444 kubelet[2659]: I0120 02:20:56.254770 2659 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 20 02:20:56.256444 kubelet[2659]: I0120 02:20:56.255297 2659 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 02:20:56.256444 kubelet[2659]: I0120 02:20:56.255332 2659 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 02:20:56.256444 kubelet[2659]: I0120 02:20:56.255590 2659 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 02:20:56.257036 kubelet[2659]: I0120 02:20:56.255609 2659 container_manager_linux.go:306] "Creating device plugin manager"
Jan 20 02:20:56.257036 kubelet[2659]: I0120 02:20:56.255804 2659 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 20 02:20:56.279954 kubelet[2659]: I0120 02:20:56.277674 2659 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 02:20:56.286081 kubelet[2659]: I0120 02:20:56.285041 2659 kubelet.go:475] "Attempting to sync node with API server"
Jan 20 02:20:56.292597 kubelet[2659]: I0120 02:20:56.286205 2659 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 02:20:56.292597 kubelet[2659]: I0120 02:20:56.286270 2659 kubelet.go:387] "Adding apiserver pod source"
Jan 20 02:20:56.292597 kubelet[2659]: I0120 02:20:56.286300 2659 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 02:20:56.295037 kubelet[2659]: E0120 02:20:56.293125 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 02:20:56.295037 kubelet[2659]: E0120 02:20:56.293303 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 02:20:56.313035 kubelet[2659]: I0120 02:20:56.311994 2659 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 20 02:20:56.314630 kubelet[2659]: I0120 02:20:56.314190 2659 kubelet.go:940] "Not starting ClusterTrustBundle informer because
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 02:20:56.314630 kubelet[2659]: I0120 02:20:56.314268 2659 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 02:20:56.314630 kubelet[2659]: W0120 02:20:56.314339 2659 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 02:20:56.348840 kubelet[2659]: I0120 02:20:56.344940 2659 server.go:1262] "Started kubelet" Jan 20 02:20:56.348840 kubelet[2659]: I0120 02:20:56.345368 2659 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 02:20:56.354636 kubelet[2659]: I0120 02:20:56.354119 2659 server.go:310] "Adding debug handlers to kubelet server" Jan 20 02:20:56.365469 kubelet[2659]: I0120 02:20:56.364068 2659 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 02:20:56.365469 kubelet[2659]: I0120 02:20:56.364125 2659 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 02:20:56.365469 kubelet[2659]: I0120 02:20:56.364468 2659 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 02:20:56.375601 kubelet[2659]: I0120 02:20:56.371226 2659 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 02:20:56.375601 kubelet[2659]: I0120 02:20:56.373587 2659 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 02:20:56.385836 kubelet[2659]: I0120 02:20:56.383587 2659 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 02:20:56.385836 kubelet[2659]: I0120 02:20:56.383773 2659 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 02:20:56.385836 kubelet[2659]: I0120 
02:20:56.383839 2659 reconciler.go:29] "Reconciler: start to sync state" Jan 20 02:20:56.385836 kubelet[2659]: E0120 02:20:56.384337 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 02:20:56.412714 kubelet[2659]: E0120 02:20:56.384416 2659 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:20:56.412714 kubelet[2659]: E0120 02:20:56.408606 2659 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 02:20:56.412714 kubelet[2659]: E0120 02:20:56.410036 2659 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms" Jan 20 02:20:56.428870 kubelet[2659]: I0120 02:20:56.424319 2659 factory.go:223] Registration of the systemd container factory successfully Jan 20 02:20:56.428870 kubelet[2659]: I0120 02:20:56.427938 2659 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 02:20:56.447000 audit[2676]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:20:56.447000 audit[2676]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe102fec90 a2=0 a3=0 items=0 ppid=2659 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.463880 kubelet[2659]: E0120 02:20:56.443714 2659 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4f099c452353 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:20:56.344838995 +0000 UTC m=+1.298493841,LastTimestamp:2026-01-20 02:20:56.344838995 +0000 UTC m=+1.298493841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:20:56.468009 kubelet[2659]: I0120 02:20:56.467959 2659 factory.go:223] Registration of the containerd container factory successfully Jan 20 02:20:56.519360 kubelet[2659]: E0120 02:20:56.518227 2659 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:20:56.550855 kernel: audit: type=1325 audit(1768875656.447:342): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:20:56.551008 kernel: audit: type=1300 audit(1768875656.447:342): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe102fec90 a2=0 a3=0 items=0 ppid=2659 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.551054 kernel: audit: type=1327 audit(1768875656.447:342): 
proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 02:20:56.447000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 02:20:56.578601 kernel: audit: type=1325 audit(1768875656.466:343): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2677 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:20:56.466000 audit[2677]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2677 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:20:56.623246 kernel: audit: type=1300 audit(1768875656.466:343): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdcd4eef0 a2=0 a3=0 items=0 ppid=2659 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.466000 audit[2677]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdcd4eef0 a2=0 a3=0 items=0 ppid=2659 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.623643 kubelet[2659]: E0120 02:20:56.613292 2659 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms" Jan 20 02:20:56.623643 kubelet[2659]: E0120 02:20:56.618815 2659 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:20:56.640010 kubelet[2659]: I0120 02:20:56.639946 2659 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 02:20:56.640272 kubelet[2659]: I0120 02:20:56.640187 2659 
cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 02:20:56.640387 kubelet[2659]: I0120 02:20:56.640219 2659 state_mem.go:36] "Initialized new in-memory state store" Jan 20 02:20:56.466000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 02:20:56.672287 kubelet[2659]: I0120 02:20:56.672188 2659 policy_none.go:49] "None policy: Start" Jan 20 02:20:56.672287 kubelet[2659]: I0120 02:20:56.672272 2659 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 02:20:56.672287 kubelet[2659]: I0120 02:20:56.672293 2659 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 02:20:56.682927 kubelet[2659]: I0120 02:20:56.682783 2659 policy_none.go:47] "Start" Jan 20 02:20:56.684435 kernel: audit: type=1327 audit(1768875656.466:343): proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 02:20:56.494000 audit[2682]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:20:56.494000 audit[2682]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc6145c390 a2=0 a3=0 items=0 ppid=2659 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.721850 kubelet[2659]: E0120 02:20:56.720920 2659 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:20:56.732894 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 20 02:20:56.762862 kernel: audit: type=1325 audit(1768875656.494:344): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:20:56.763000 kernel: audit: type=1300 audit(1768875656.494:344): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc6145c390 a2=0 a3=0 items=0 ppid=2659 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.763079 kernel: audit: type=1327 audit(1768875656.494:344): proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 02:20:56.494000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 02:20:56.545000 audit[2684]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:20:56.545000 audit[2684]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe963818d0 a2=0 a3=0 items=0 ppid=2659 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.545000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 02:20:56.799000 audit[2689]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2689 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:20:56.799000 audit[2689]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc7a9f17f0 a2=0 a3=0 items=0 ppid=2659 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 20 02:20:56.800000 audit[2693]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2693 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:20:56.800000 audit[2693]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe7b1b1b10 a2=0 a3=0 items=0 ppid=2659 pid=2693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.800000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 02:20:56.806915 kubelet[2659]: I0120 02:20:56.801975 2659 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 20 02:20:56.820012 kubelet[2659]: I0120 02:20:56.819974 2659 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 20 02:20:56.820012 kubelet[2659]: I0120 02:20:56.820008 2659 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 02:20:56.820209 kubelet[2659]: I0120 02:20:56.820086 2659 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 02:20:56.820209 kubelet[2659]: E0120 02:20:56.820155 2659 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 02:20:56.820000 audit[2692]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2692 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:20:56.820000 audit[2692]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe27d92640 a2=0 a3=0 items=0 ppid=2659 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.820000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 02:20:56.824464 kubelet[2659]: E0120 02:20:56.822610 2659 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:20:56.825090 kubelet[2659]: E0120 02:20:56.825064 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 02:20:56.834000 audit[2695]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2695 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:20:56.834000 audit[2695]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf9d71c10 a2=0 a3=0 
items=0 ppid=2659 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.834000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 02:20:56.842000 audit[2694]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2694 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:20:56.842000 audit[2694]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcdc11b9d0 a2=0 a3=0 items=0 ppid=2659 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.848000 audit[2696]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:20:56.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 02:20:56.848000 audit[2696]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd794bb170 a2=0 a3=0 items=0 ppid=2659 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.848000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 02:20:56.868000 audit[2697]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2697 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:20:56.868000 audit[2697]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd39314ab0 a2=0 a3=0 items=0 ppid=2659 pid=2697 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 02:20:56.871501 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 02:20:56.891000 audit[2698]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2698 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:20:56.891000 audit[2698]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9c71c580 a2=0 a3=0 items=0 ppid=2659 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:20:56.891000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 02:20:56.899920 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 20 02:20:56.927864 kubelet[2659]: E0120 02:20:56.923229 2659 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:20:56.927864 kubelet[2659]: E0120 02:20:56.923601 2659 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 02:20:56.927864 kubelet[2659]: E0120 02:20:56.925381 2659 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 02:20:56.930005 kubelet[2659]: I0120 02:20:56.928387 2659 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 02:20:56.930005 kubelet[2659]: I0120 02:20:56.928417 2659 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 02:20:56.930005 kubelet[2659]: I0120 02:20:56.929043 2659 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 02:20:56.939500 kubelet[2659]: E0120 02:20:56.937230 2659 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 02:20:56.939500 kubelet[2659]: E0120 02:20:56.937575 2659 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:20:57.020210 kubelet[2659]: E0120 02:20:57.020002 2659 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms" Jan 20 02:20:57.043051 kubelet[2659]: I0120 02:20:57.042123 2659 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:20:57.047117 kubelet[2659]: E0120 02:20:57.046235 2659 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 20 02:20:57.378842 kubelet[2659]: E0120 02:20:57.305411 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 02:20:57.378842 kubelet[2659]: I0120 02:20:57.376369 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34752e74e0d884d240fd6441c2d8a1b2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"34752e74e0d884d240fd6441c2d8a1b2\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:20:57.378842 kubelet[2659]: I0120 02:20:57.376464 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34752e74e0d884d240fd6441c2d8a1b2-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"34752e74e0d884d240fd6441c2d8a1b2\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:20:57.378842 kubelet[2659]: I0120 02:20:57.376496 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34752e74e0d884d240fd6441c2d8a1b2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"34752e74e0d884d240fd6441c2d8a1b2\") " pod="kube-system/kube-apiserver-localhost" Jan 20 02:20:57.378842 kubelet[2659]: I0120 02:20:57.378329 2659 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:20:57.378842 kubelet[2659]: E0120 02:20:57.378621 2659 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 20 02:20:57.535387 systemd[1]: Created slice kubepods-burstable-pod34752e74e0d884d240fd6441c2d8a1b2.slice - libcontainer container kubepods-burstable-pod34752e74e0d884d240fd6441c2d8a1b2.slice. 
Jan 20 02:20:57.563664 kubelet[2659]: E0120 02:20:57.563632 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:20:57.577855 kubelet[2659]: E0120 02:20:57.575887 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:20:57.577987 containerd[1640]: time="2026-01-20T02:20:57.577149668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:34752e74e0d884d240fd6441c2d8a1b2,Namespace:kube-system,Attempt:0,}" Jan 20 02:20:57.581626 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 20 02:20:57.590103 kubelet[2659]: E0120 02:20:57.588354 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:20:57.605482 kubelet[2659]: I0120 02:20:57.602285 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:20:57.605482 kubelet[2659]: I0120 02:20:57.602448 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 02:20:57.605482 kubelet[2659]: I0120 02:20:57.602482 2659 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:20:57.605482 kubelet[2659]: I0120 02:20:57.602993 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:20:57.605482 kubelet[2659]: I0120 02:20:57.603365 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:20:57.605916 kubelet[2659]: I0120 02:20:57.603392 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 02:20:57.617157 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
Jan 20 02:20:57.622408 kubelet[2659]: E0120 02:20:57.621202 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 20 02:20:57.805470 kubelet[2659]: I0120 02:20:57.798187 2659 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 02:20:57.805470 kubelet[2659]: E0120 02:20:57.802858 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 20 02:20:57.807194 kubelet[2659]: E0120 02:20:57.804788 2659 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Jan 20 02:20:57.823593 kubelet[2659]: E0120 02:20:57.823357 2659 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="1.6s"
Jan 20 02:20:57.840107 kubelet[2659]: E0120 02:20:57.839945 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 20 02:20:57.883460 kubelet[2659]: E0120 02:20:57.876468 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 02:20:57.914938 kubelet[2659]: E0120 02:20:57.914812 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:20:57.917614 containerd[1640]: time="2026-01-20T02:20:57.916368277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}"
Jan 20 02:20:57.957237 kubelet[2659]: E0120 02:20:57.955284 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:20:57.960955 containerd[1640]: time="2026-01-20T02:20:57.960904354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}"
Jan 20 02:20:58.317390 kubelet[2659]: E0120 02:20:58.317067 2659 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 20 02:20:58.620716 kubelet[2659]: I0120 02:20:58.619924 2659 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 02:20:58.621502 kubelet[2659]: E0120 02:20:58.621458 2659 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Jan 20 02:20:58.809947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount696854306.mount: Deactivated successfully.
Jan 20 02:20:58.906495 containerd[1640]: time="2026-01-20T02:20:58.905583538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 02:20:58.922260 containerd[1640]: time="2026-01-20T02:20:58.922164036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=316581"
Jan 20 02:20:58.937271 containerd[1640]: time="2026-01-20T02:20:58.937149664Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 02:20:58.949274 containerd[1640]: time="2026-01-20T02:20:58.947759897Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 02:20:58.954354 containerd[1640]: time="2026-01-20T02:20:58.953824304Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 02:20:58.960484 containerd[1640]: time="2026-01-20T02:20:58.960233237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Jan 20 02:20:58.963971 containerd[1640]: time="2026-01-20T02:20:58.963411350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Jan 20 02:20:58.974912 containerd[1640]: time="2026-01-20T02:20:58.972603494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 02:20:58.974912 containerd[1640]: time="2026-01-20T02:20:58.973856272Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.375465838s"
Jan 20 02:20:58.983310 containerd[1640]: time="2026-01-20T02:20:58.982607886Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.031170948s"
Jan 20 02:20:58.985721 containerd[1640]: time="2026-01-20T02:20:58.985229480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 994.069078ms"
Jan 20 02:20:59.151216 kubelet[2659]: E0120 02:20:59.143974 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 02:20:59.359899 containerd[1640]: time="2026-01-20T02:20:59.355484804Z" level=info msg="connecting to shim 39384d34a04d23681e40005f6d340275dfdc1cb7b4da6d980d4d8394f6e7eeac" address="unix:///run/containerd/s/32b8188b67ea639612ce43054422b7793c18eaa1371062e87392b043712ef6b9" namespace=k8s.io protocol=ttrpc version=3
Jan 20 02:20:59.386246 containerd[1640]: time="2026-01-20T02:20:59.386179759Z" level=info msg="connecting to shim 86b6c08de92f4425beb9a72dc855bd6bdee335021439b6067d1c6126732f290a" address="unix:///run/containerd/s/7d07c31cd51b3048f3786af164fd1c24aeb0dcfafdb7f4650fe4c7891b93ef8e" namespace=k8s.io protocol=ttrpc version=3
Jan 20 02:20:59.430307 kubelet[2659]: E0120 02:20:59.430186 2659 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="3.2s"
Jan 20 02:20:59.452179 containerd[1640]: time="2026-01-20T02:20:59.452020821Z" level=info msg="connecting to shim 35266ffcbfefb91d99c73abb389f64dc351f36f770e4f3376793cfb932defa8a" address="unix:///run/containerd/s/d3c2171262e3e4df2eb76152e3733ae9c22476785b1c4bc19387a26f34ca0c68" namespace=k8s.io protocol=ttrpc version=3
Jan 20 02:20:59.670719 kubelet[2659]: E0120 02:20:59.669004 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 02:20:59.744576 systemd[1]: Started cri-containerd-39384d34a04d23681e40005f6d340275dfdc1cb7b4da6d980d4d8394f6e7eeac.scope - libcontainer container 39384d34a04d23681e40005f6d340275dfdc1cb7b4da6d980d4d8394f6e7eeac.
Jan 20 02:20:59.838389 systemd[1]: Started cri-containerd-35266ffcbfefb91d99c73abb389f64dc351f36f770e4f3376793cfb932defa8a.scope - libcontainer container 35266ffcbfefb91d99c73abb389f64dc351f36f770e4f3376793cfb932defa8a.
Jan 20 02:20:59.854721 systemd[1]: Started cri-containerd-86b6c08de92f4425beb9a72dc855bd6bdee335021439b6067d1c6126732f290a.scope - libcontainer container 86b6c08de92f4425beb9a72dc855bd6bdee335021439b6067d1c6126732f290a.
Jan 20 02:20:59.878000 audit: BPF prog-id=81 op=LOAD
Jan 20 02:20:59.879000 audit: BPF prog-id=82 op=LOAD
Jan 20 02:20:59.879000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2719 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333834643334613034643233363831653430303035663664333430
Jan 20 02:20:59.879000 audit: BPF prog-id=82 op=UNLOAD
Jan 20 02:20:59.879000 audit[2756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2719 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333834643334613034643233363831653430303035663664333430
Jan 20 02:20:59.879000 audit: BPF prog-id=83 op=LOAD
Jan 20 02:20:59.879000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2719 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333834643334613034643233363831653430303035663664333430
Jan 20 02:20:59.885000 audit: BPF prog-id=84 op=LOAD
Jan 20 02:20:59.885000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2719 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333834643334613034643233363831653430303035663664333430
Jan 20 02:20:59.885000 audit: BPF prog-id=84 op=UNLOAD
Jan 20 02:20:59.885000 audit[2756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2719 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333834643334613034643233363831653430303035663664333430
Jan 20 02:20:59.885000 audit: BPF prog-id=83 op=UNLOAD
Jan 20 02:20:59.885000 audit[2756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2719 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333834643334613034643233363831653430303035663664333430
Jan 20 02:20:59.885000 audit: BPF prog-id=85 op=LOAD
Jan 20 02:20:59.885000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2719 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339333834643334613034643233363831653430303035663664333430
Jan 20 02:20:59.930000 audit: BPF prog-id=86 op=LOAD
Jan 20 02:20:59.944923 kernel: kauditd_printk_skb: 49 callbacks suppressed
Jan 20 02:20:59.945028 kernel: audit: type=1334 audit(1768875659.930:362): prog-id=86 op=LOAD
Jan 20 02:20:59.963731 kernel: audit: type=1334 audit(1768875659.930:363): prog-id=87 op=LOAD
Jan 20 02:20:59.930000 audit: BPF prog-id=87 op=LOAD
Jan 20 02:20:59.930000 audit[2757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2726 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:00.005885 kernel: audit: type=1300 audit(1768875659.930:363): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2726 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836623663303864653932663434323562656239613732646338353562
Jan 20 02:21:00.025611 kernel: audit: type=1327 audit(1768875659.930:363): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836623663303864653932663434323562656239613732646338353562
Jan 20 02:21:00.025762 kernel: audit: type=1334 audit(1768875659.930:364): prog-id=87 op=UNLOAD
Jan 20 02:20:59.930000 audit: BPF prog-id=87 op=UNLOAD
Jan 20 02:21:00.058069 kernel: audit: type=1300 audit(1768875659.930:364): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2726 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.930000 audit[2757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2726 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836623663303864653932663434323562656239613732646338353562
Jan 20 02:21:00.161392 kernel: audit: type=1327 audit(1768875659.930:364): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836623663303864653932663434323562656239613732646338353562
Jan 20 02:21:00.161630 kernel: audit: type=1334 audit(1768875659.930:365): prog-id=88 op=LOAD
Jan 20 02:20:59.930000 audit: BPF prog-id=88 op=LOAD
Jan 20 02:21:00.164414 kubelet[2659]: E0120 02:21:00.164350 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 20 02:21:00.174600 kernel: audit: type=1300 audit(1768875659.930:365): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2726 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.930000 audit[2757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2726 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:00.240055 kernel: audit: type=1327 audit(1768875659.930:365): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836623663303864653932663434323562656239613732646338353562
Jan 20 02:20:59.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836623663303864653932663434323562656239613732646338353562
Jan 20 02:21:00.275679 kubelet[2659]: I0120 02:21:00.274774 2659 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 02:21:00.280446 kubelet[2659]: E0120 02:21:00.280384 2659 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Jan 20 02:20:59.930000 audit: BPF prog-id=89 op=LOAD
Jan 20 02:20:59.930000 audit[2757]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2726 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836623663303864653932663434323562656239613732646338353562
Jan 20 02:20:59.938000 audit: BPF prog-id=89 op=UNLOAD
Jan 20 02:20:59.938000 audit[2757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2726 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836623663303864653932663434323562656239613732646338353562
Jan 20 02:20:59.938000 audit: BPF prog-id=88 op=UNLOAD
Jan 20 02:20:59.938000 audit[2757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2726 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836623663303864653932663434323562656239613732646338353562
Jan 20 02:20:59.938000 audit: BPF prog-id=90 op=LOAD
Jan 20 02:20:59.938000 audit[2757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2726 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836623663303864653932663434323562656239613732646338353562
Jan 20 02:20:59.954000 audit: BPF prog-id=91 op=LOAD
Jan 20 02:20:59.954000 audit: BPF prog-id=92 op=LOAD
Jan 20 02:20:59.954000 audit[2779]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2738 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335323636666663626665666239316439396337336162623338396636
Jan 20 02:20:59.954000 audit: BPF prog-id=92 op=UNLOAD
Jan 20 02:20:59.954000 audit[2779]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2738 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335323636666663626665666239316439396337336162623338396636
Jan 20 02:20:59.954000 audit: BPF prog-id=93 op=LOAD
Jan 20 02:20:59.954000 audit[2779]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2738 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335323636666663626665666239316439396337336162623338396636
Jan 20 02:20:59.954000 audit: BPF prog-id=94 op=LOAD
Jan 20 02:20:59.954000 audit[2779]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2738 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335323636666663626665666239316439396337336162623338396636
Jan 20 02:20:59.954000 audit: BPF prog-id=94 op=UNLOAD
Jan 20 02:20:59.954000 audit[2779]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2738 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335323636666663626665666239316439396337336162623338396636
Jan 20 02:20:59.954000 audit: BPF prog-id=93 op=UNLOAD
Jan 20 02:20:59.954000 audit[2779]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2738 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335323636666663626665666239316439396337336162623338396636
Jan 20 02:20:59.954000 audit: BPF prog-id=95 op=LOAD
Jan 20 02:20:59.954000 audit[2779]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2738 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:20:59.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335323636666663626665666239316439396337336162623338396636
Jan 20 02:21:00.343953 containerd[1640]: time="2026-01-20T02:21:00.339255579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"39384d34a04d23681e40005f6d340275dfdc1cb7b4da6d980d4d8394f6e7eeac\""
Jan 20 02:21:00.356125 kubelet[2659]: E0120 02:21:00.353502 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:00.456203 containerd[1640]: time="2026-01-20T02:21:00.455958635Z" level=info msg="CreateContainer within sandbox \"39384d34a04d23681e40005f6d340275dfdc1cb7b4da6d980d4d8394f6e7eeac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 20 02:21:00.469935 containerd[1640]: time="2026-01-20T02:21:00.469798132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"35266ffcbfefb91d99c73abb389f64dc351f36f770e4f3376793cfb932defa8a\""
Jan 20 02:21:00.471311 kubelet[2659]: E0120 02:21:00.471030 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:00.507217 containerd[1640]: time="2026-01-20T02:21:00.503197064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:34752e74e0d884d240fd6441c2d8a1b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"86b6c08de92f4425beb9a72dc855bd6bdee335021439b6067d1c6126732f290a\""
Jan 20 02:21:00.511597 kubelet[2659]: E0120 02:21:00.511326 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:00.536087 containerd[1640]: time="2026-01-20T02:21:00.535699821Z" level=info msg="CreateContainer within sandbox \"35266ffcbfefb91d99c73abb389f64dc351f36f770e4f3376793cfb932defa8a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 20 02:21:00.551739 containerd[1640]: time="2026-01-20T02:21:00.551470393Z" level=info msg="CreateContainer within sandbox \"86b6c08de92f4425beb9a72dc855bd6bdee335021439b6067d1c6126732f290a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 20 02:21:00.613625 kubelet[2659]: E0120 02:21:00.613408 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 20 02:21:00.694144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840567881.mount: Deactivated successfully.
Jan 20 02:21:00.724900 containerd[1640]: time="2026-01-20T02:21:00.715899625Z" level=info msg="Container abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:21:00.753729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2126744883.mount: Deactivated successfully.
Jan 20 02:21:00.793744 containerd[1640]: time="2026-01-20T02:21:00.777282667Z" level=info msg="CreateContainer within sandbox \"39384d34a04d23681e40005f6d340275dfdc1cb7b4da6d980d4d8394f6e7eeac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7\""
Jan 20 02:21:00.793744 containerd[1640]: time="2026-01-20T02:21:00.789940954Z" level=info msg="Container bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:21:00.801429 containerd[1640]: time="2026-01-20T02:21:00.799789513Z" level=info msg="StartContainer for \"abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7\""
Jan 20 02:21:00.829354 containerd[1640]: time="2026-01-20T02:21:00.829293405Z" level=info msg="connecting to shim abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7" address="unix:///run/containerd/s/32b8188b67ea639612ce43054422b7793c18eaa1371062e87392b043712ef6b9" protocol=ttrpc version=3
Jan 20 02:21:00.832796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3082459310.mount: Deactivated successfully.
Jan 20 02:21:00.857934 containerd[1640]: time="2026-01-20T02:21:00.852495429Z" level=info msg="Container 32d6d9bcb75d8620a69a4d881af90f5c12579722521c645ce54e4e0f27cd4942: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:21:00.932943 containerd[1640]: time="2026-01-20T02:21:00.931390387Z" level=info msg="CreateContainer within sandbox \"35266ffcbfefb91d99c73abb389f64dc351f36f770e4f3376793cfb932defa8a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9\""
Jan 20 02:21:00.940971 containerd[1640]: time="2026-01-20T02:21:00.940811351Z" level=info msg="StartContainer for \"bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9\""
Jan 20 02:21:00.943926 containerd[1640]: time="2026-01-20T02:21:00.943332976Z" level=info msg="connecting to shim bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9" address="unix:///run/containerd/s/d3c2171262e3e4df2eb76152e3733ae9c22476785b1c4bc19387a26f34ca0c68" protocol=ttrpc version=3
Jan 20 02:21:00.998391 containerd[1640]: time="2026-01-20T02:21:00.995870425Z" level=info msg="CreateContainer within sandbox \"86b6c08de92f4425beb9a72dc855bd6bdee335021439b6067d1c6126732f290a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"32d6d9bcb75d8620a69a4d881af90f5c12579722521c645ce54e4e0f27cd4942\""
Jan 20 02:21:00.998391 containerd[1640]: time="2026-01-20T02:21:00.996789214Z" level=info msg="StartContainer for \"32d6d9bcb75d8620a69a4d881af90f5c12579722521c645ce54e4e0f27cd4942\""
Jan 20 02:21:01.016146 containerd[1640]: time="2026-01-20T02:21:01.015412585Z" level=info msg="connecting to shim 32d6d9bcb75d8620a69a4d881af90f5c12579722521c645ce54e4e0f27cd4942" address="unix:///run/containerd/s/7d07c31cd51b3048f3786af164fd1c24aeb0dcfafdb7f4650fe4c7891b93ef8e" protocol=ttrpc version=3
Jan 20 02:21:01.056440 systemd[1]: Started cri-containerd-abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7.scope - libcontainer container abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7.
Jan 20 02:21:01.161759 systemd[1]: Started cri-containerd-bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9.scope - libcontainer container bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9.
Jan 20 02:21:01.201000 audit: BPF prog-id=96 op=LOAD
Jan 20 02:21:01.247000 audit: BPF prog-id=97 op=LOAD
Jan 20 02:21:01.247000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2719 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:01.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162666562376535336235333832333263646165633165663262393436
Jan 20 02:21:01.247000 audit: BPF prog-id=97 op=UNLOAD
Jan 20 02:21:01.247000 audit[2839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2719 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:01.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162666562376535336235333832333263646165633165663262393436
Jan 20 02:21:01.247427 systemd[1]: Started cri-containerd-32d6d9bcb75d8620a69a4d881af90f5c12579722521c645ce54e4e0f27cd4942.scope - libcontainer container 32d6d9bcb75d8620a69a4d881af90f5c12579722521c645ce54e4e0f27cd4942.
Jan 20 02:21:01.252000 audit: BPF prog-id=98 op=LOAD
Jan 20 02:21:01.252000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2719 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:01.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162666562376535336235333832333263646165633165663262393436
Jan 20 02:21:01.258000 audit: BPF prog-id=99 op=LOAD
Jan 20 02:21:01.258000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2719 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:01.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162666562376535336235333832333263646165633165663262393436
Jan 20 02:21:01.258000 audit: BPF prog-id=99 op=UNLOAD
Jan 20 02:21:01.258000 audit[2839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2719 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:01.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162666562376535336235333832333263646165633165663262393436
Jan 20 02:21:01.258000 audit: BPF prog-id=98 op=UNLOAD
Jan 20 02:21:01.258000 audit[2839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2719 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:01.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162666562376535336235333832333263646165633165663262393436
Jan 20 02:21:01.258000 audit: BPF prog-id=100 op=LOAD
Jan 20 02:21:01.258000 audit[2839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2719 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:01.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162666562376535336235333832333263646165633165663262393436
Jan 20 02:21:01.339000 audit: BPF prog-id=101 op=LOAD
Jan 20 02:21:01.352000 audit: BPF prog-id=102 op=LOAD
Jan 20 02:21:01.352000 audit[2852]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2738 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc"
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:21:01.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633030383565643735613066363161366537346131633863386233 Jan 20 02:21:01.363000 audit: BPF prog-id=102 op=UNLOAD Jan 20 02:21:01.363000 audit[2852]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2738 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:21:01.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633030383565643735613066363161366537346131633863386233 Jan 20 02:21:01.363000 audit: BPF prog-id=103 op=LOAD Jan 20 02:21:01.363000 audit[2852]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2738 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:21:01.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633030383565643735613066363161366537346131633863386233 Jan 20 02:21:01.363000 audit: BPF prog-id=104 op=LOAD Jan 20 02:21:01.363000 audit[2852]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2738 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:21:01.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633030383565643735613066363161366537346131633863386233 Jan 20 02:21:01.374000 audit: BPF prog-id=104 op=UNLOAD Jan 20 02:21:01.374000 audit[2852]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2738 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:21:01.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633030383565643735613066363161366537346131633863386233 Jan 20 02:21:01.377000 audit: BPF prog-id=103 op=UNLOAD Jan 20 02:21:01.377000 audit[2852]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2738 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:21:01.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633030383565643735613066363161366537346131633863386233 Jan 20 02:21:01.377000 audit: BPF prog-id=105 op=LOAD Jan 20 02:21:01.377000 audit[2852]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2738 pid=2852 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:21:01.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263633030383565643735613066363161366537346131633863386233 Jan 20 02:21:01.462000 audit: BPF prog-id=106 op=LOAD Jan 20 02:21:01.470000 audit: BPF prog-id=107 op=LOAD Jan 20 02:21:01.470000 audit[2862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2726 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:21:01.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643664396263623735643836323061363961346438383161663930 Jan 20 02:21:01.470000 audit: BPF prog-id=107 op=UNLOAD Jan 20 02:21:01.470000 audit[2862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2726 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:21:01.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643664396263623735643836323061363961346438383161663930 Jan 20 02:21:01.470000 audit: BPF prog-id=108 op=LOAD Jan 20 02:21:01.470000 audit[2862]: SYSCALL arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2726 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:21:01.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643664396263623735643836323061363961346438383161663930 Jan 20 02:21:01.470000 audit: BPF prog-id=109 op=LOAD Jan 20 02:21:01.470000 audit[2862]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2726 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:21:01.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643664396263623735643836323061363961346438383161663930 Jan 20 02:21:01.470000 audit: BPF prog-id=109 op=UNLOAD Jan 20 02:21:01.470000 audit[2862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2726 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:21:01.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643664396263623735643836323061363961346438383161663930 Jan 20 02:21:01.470000 audit: BPF prog-id=108 op=UNLOAD Jan 20 02:21:01.470000 
audit[2862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2726 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:21:01.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643664396263623735643836323061363961346438383161663930 Jan 20 02:21:01.470000 audit: BPF prog-id=110 op=LOAD Jan 20 02:21:01.470000 audit[2862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2726 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:21:01.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332643664396263623735643836323061363961346438383161663930 Jan 20 02:21:01.935745 containerd[1640]: time="2026-01-20T02:21:01.935696937Z" level=info msg="StartContainer for \"abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7\" returns successfully" Jan 20 02:21:02.057906 containerd[1640]: time="2026-01-20T02:21:02.054242790Z" level=info msg="StartContainer for \"bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9\" returns successfully" Jan 20 02:21:02.072285 kubelet[2659]: E0120 02:21:02.072199 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:21:02.078059 kubelet[2659]: E0120 02:21:02.077956 2659 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:02.247672 containerd[1640]: time="2026-01-20T02:21:02.243711570Z" level=info msg="StartContainer for \"32d6d9bcb75d8620a69a4d881af90f5c12579722521c645ce54e4e0f27cd4942\" returns successfully" Jan 20 02:21:02.412373 kubelet[2659]: E0120 02:21:02.409233 2659 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 02:21:02.632670 kubelet[2659]: E0120 02:21:02.632395 2659 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="6.4s" Jan 20 02:21:03.174655 kubelet[2659]: E0120 02:21:03.166599 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:21:03.174655 kubelet[2659]: E0120 02:21:03.166812 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:03.198090 kubelet[2659]: E0120 02:21:03.196732 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:21:03.203462 kubelet[2659]: E0120 02:21:03.201304 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 
02:21:03.207766 kubelet[2659]: E0120 02:21:03.207730 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:21:03.214258 kubelet[2659]: E0120 02:21:03.211749 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:03.490935 kubelet[2659]: I0120 02:21:03.489570 2659 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:21:04.246717 kubelet[2659]: E0120 02:21:04.246652 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:21:04.247933 kubelet[2659]: E0120 02:21:04.246854 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:04.249283 kubelet[2659]: E0120 02:21:04.249047 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:21:04.249283 kubelet[2659]: E0120 02:21:04.249211 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:05.269439 kubelet[2659]: E0120 02:21:05.269397 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:21:05.281687 kubelet[2659]: E0120 02:21:05.270286 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:05.281687 kubelet[2659]: E0120 02:21:05.270760 
2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:21:05.282258 kubelet[2659]: E0120 02:21:05.282227 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:06.955069 kubelet[2659]: E0120 02:21:06.950889 2659 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:21:12.161232 kubelet[2659]: E0120 02:21:12.143985 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:21:12.161232 kubelet[2659]: E0120 02:21:12.144769 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:12.274427 kubelet[2659]: E0120 02:21:12.264246 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:21:12.274427 kubelet[2659]: E0120 02:21:12.264448 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:13.733666 kubelet[2659]: E0120 02:21:13.732930 2659 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 20 02:21:13.795710 kubelet[2659]: E0120 02:21:13.792270 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 02:21:13.814637 kubelet[2659]: E0120 02:21:13.814455 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:21:13.815106 kubelet[2659]: E0120 02:21:13.815046 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:13.875273 kubelet[2659]: E0120 02:21:13.867602 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:21:13.875273 kubelet[2659]: E0120 02:21:13.867836 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:14.246686 kubelet[2659]: E0120 02:21:14.245576 2659 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.188c4f099c452353 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 02:20:56.344838995 +0000 UTC m=+1.298493841,LastTimestamp:2026-01-20 02:20:56.344838995 +0000 UTC m=+1.298493841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 02:21:14.267194 kubelet[2659]: E0120 02:21:14.261297 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: 
Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 02:21:15.736628 kubelet[2659]: E0120 02:21:15.730647 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 02:21:15.736628 kubelet[2659]: E0120 02:21:15.734508 2659 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 02:21:17.032699 kubelet[2659]: E0120 02:21:16.963255 2659 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 02:21:19.063489 kubelet[2659]: E0120 02:21:19.038201 2659 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 20 02:21:20.809927 kubelet[2659]: I0120 02:21:20.798495 2659 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 02:21:20.868148 kubelet[2659]: E0120 02:21:20.805160 2659 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 02:21:23.855820 kubelet[2659]: I0120 02:21:23.849360 2659 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 02:21:23.855820 kubelet[2659]: E0120 02:21:23.849636 2659 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 02:21:24.047013 kubelet[2659]: E0120 02:21:24.046003 2659 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 02:21:24.047013 kubelet[2659]: E0120 02:21:24.046186 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:24.186379 kubelet[2659]: E0120 02:21:24.186203 2659 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 02:21:24.313752 kubelet[2659]: I0120 02:21:24.311456 2659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 02:21:24.387501 kubelet[2659]: E0120 02:21:24.381826 2659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 02:21:24.387501 kubelet[2659]: I0120 02:21:24.381896 2659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 02:21:24.403300 kubelet[2659]: E0120 02:21:24.403249 2659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 02:21:24.403631 kubelet[2659]: I0120 02:21:24.403605 2659 
kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 02:21:25.128728 kubelet[2659]: I0120 02:21:25.125373 2659 apiserver.go:52] "Watching apiserver" Jan 20 02:21:25.191001 kubelet[2659]: I0120 02:21:25.190908 2659 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 02:21:25.195936 kubelet[2659]: E0120 02:21:25.191930 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:32.111855 kubelet[2659]: E0120 02:21:32.111094 2659 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.223s" Jan 20 02:21:43.060138 kubelet[2659]: I0120 02:21:43.055268 2659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 02:21:43.081307 systemd[1]: Reload requested from client PID 2963 ('systemctl') (unit session-10.scope)... Jan 20 02:21:43.081339 systemd[1]: Reloading... 
Jan 20 02:21:43.166612 kubelet[2659]: I0120 02:21:43.154912 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=19.154866824 podStartE2EDuration="19.154866824s" podCreationTimestamp="2026-01-20 02:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:21:27.497666787 +0000 UTC m=+32.451321641" watchObservedRunningTime="2026-01-20 02:21:43.154866824 +0000 UTC m=+48.108521669" Jan 20 02:21:43.166612 kubelet[2659]: E0120 02:21:43.161363 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:44.068280 kubelet[2659]: E0120 02:21:44.032990 2659 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:44.726419 zram_generator::config[3007]: No configuration found. Jan 20 02:21:46.994123 systemd[1]: Reloading finished in 3912 ms. Jan 20 02:21:47.191649 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 02:21:47.265410 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 02:21:47.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:21:47.273701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 02:21:47.284968 systemd[1]: kubelet.service: Consumed 5.263s CPU time, 132M memory peak. 
Jan 20 02:21:47.299603 kernel: kauditd_printk_skb: 100 callbacks suppressed
Jan 20 02:21:47.299777 kernel: audit: type=1131 audit(1768875707.272:402): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:21:47.323388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 02:21:47.362199 kernel: audit: type=1334 audit(1768875707.345:403): prog-id=111 op=LOAD
Jan 20 02:21:47.345000 audit: BPF prog-id=111 op=LOAD
Jan 20 02:21:47.345000 audit: BPF prog-id=77 op=UNLOAD
Jan 20 02:21:47.346000 audit: BPF prog-id=112 op=LOAD
Jan 20 02:21:47.346000 audit: BPF prog-id=113 op=LOAD
Jan 20 02:21:47.346000 audit: BPF prog-id=78 op=UNLOAD
Jan 20 02:21:47.346000 audit: BPF prog-id=79 op=UNLOAD
Jan 20 02:21:47.346000 audit: BPF prog-id=114 op=LOAD
Jan 20 02:21:47.346000 audit: BPF prog-id=80 op=UNLOAD
Jan 20 02:21:47.369000 audit: BPF prog-id=115 op=LOAD
Jan 20 02:21:47.369000 audit: BPF prog-id=76 op=UNLOAD
Jan 20 02:21:47.369000 audit: BPF prog-id=116 op=LOAD
Jan 20 02:21:47.369000 audit: BPF prog-id=61 op=UNLOAD
Jan 20 02:21:47.369000 audit: BPF prog-id=117 op=LOAD
Jan 20 02:21:47.369000 audit: BPF prog-id=118 op=LOAD
Jan 20 02:21:47.369000 audit: BPF prog-id=62 op=UNLOAD
Jan 20 02:21:47.369000 audit: BPF prog-id=63 op=UNLOAD
Jan 20 02:21:47.387605 kernel: audit: type=1334 audit(1768875707.345:404): prog-id=77 op=UNLOAD
Jan 20 02:21:47.387682 kernel: audit: type=1334 audit(1768875707.346:405): prog-id=112 op=LOAD
Jan 20 02:21:47.387726 kernel: audit: type=1334 audit(1768875707.346:406): prog-id=113 op=LOAD
Jan 20 02:21:47.387766 kernel: audit: type=1334 audit(1768875707.346:407): prog-id=78 op=UNLOAD
Jan 20 02:21:47.387796 kernel: audit: type=1334 audit(1768875707.346:408): prog-id=79 op=UNLOAD
Jan 20 02:21:47.387842 kernel: audit: type=1334 audit(1768875707.346:409): prog-id=114 op=LOAD
Jan 20 02:21:47.387879 kernel: audit: type=1334 audit(1768875707.346:410): prog-id=80 op=UNLOAD
Jan 20 02:21:47.387916 kernel: audit: type=1334 audit(1768875707.369:411): prog-id=115 op=LOAD
Jan 20 02:21:47.395000 audit: BPF prog-id=119 op=LOAD
Jan 20 02:21:47.395000 audit: BPF prog-id=70 op=UNLOAD
Jan 20 02:21:47.395000 audit: BPF prog-id=120 op=LOAD
Jan 20 02:21:47.395000 audit: BPF prog-id=121 op=LOAD
Jan 20 02:21:47.395000 audit: BPF prog-id=71 op=UNLOAD
Jan 20 02:21:47.395000 audit: BPF prog-id=72 op=UNLOAD
Jan 20 02:21:47.419000 audit: BPF prog-id=122 op=LOAD
Jan 20 02:21:47.419000 audit: BPF prog-id=64 op=UNLOAD
Jan 20 02:21:47.419000 audit: BPF prog-id=123 op=LOAD
Jan 20 02:21:47.419000 audit: BPF prog-id=65 op=UNLOAD
Jan 20 02:21:47.432000 audit: BPF prog-id=124 op=LOAD
Jan 20 02:21:47.432000 audit: BPF prog-id=125 op=LOAD
Jan 20 02:21:47.432000 audit: BPF prog-id=66 op=UNLOAD
Jan 20 02:21:47.432000 audit: BPF prog-id=67 op=UNLOAD
Jan 20 02:21:47.433000 audit: BPF prog-id=126 op=LOAD
Jan 20 02:21:47.451000 audit: BPF prog-id=127 op=LOAD
Jan 20 02:21:47.451000 audit: BPF prog-id=68 op=UNLOAD
Jan 20 02:21:47.459000 audit: BPF prog-id=69 op=UNLOAD
Jan 20 02:21:47.467000 audit: BPF prog-id=128 op=LOAD
Jan 20 02:21:47.467000 audit: BPF prog-id=73 op=UNLOAD
Jan 20 02:21:47.469000 audit: BPF prog-id=129 op=LOAD
Jan 20 02:21:47.474000 audit: BPF prog-id=130 op=LOAD
Jan 20 02:21:47.474000 audit: BPF prog-id=74 op=UNLOAD
Jan 20 02:21:47.474000 audit: BPF prog-id=75 op=UNLOAD
Jan 20 02:21:48.885718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 02:21:48.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:21:49.041267 (kubelet)[3053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 02:21:49.755099 kubelet[3053]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 02:21:49.759737 kubelet[3053]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 02:21:49.759737 kubelet[3053]: I0120 02:21:49.755652    3053 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 02:21:49.811060 kubelet[3053]: I0120 02:21:49.811005    3053 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 20 02:21:49.811273 kubelet[3053]: I0120 02:21:49.811259    3053 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 02:21:49.811404 kubelet[3053]: I0120 02:21:49.811387    3053 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 20 02:21:49.811599 kubelet[3053]: I0120 02:21:49.811580    3053 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 02:21:49.811999 kubelet[3053]: I0120 02:21:49.811979    3053 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 20 02:21:49.830113 kubelet[3053]: I0120 02:21:49.830053    3053 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 20 02:21:49.876621 kubelet[3053]: I0120 02:21:49.873617    3053 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 02:21:50.002798 kubelet[3053]: I0120 02:21:50.002678    3053 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 02:21:50.117983 kubelet[3053]: I0120 02:21:50.115030    3053 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 20 02:21:50.117983 kubelet[3053]: I0120 02:21:50.115367    3053 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 02:21:50.117983 kubelet[3053]: I0120 02:21:50.115411    3053 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 02:21:50.117983 kubelet[3053]: I0120 02:21:50.115716    3053 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 02:21:50.118388 kubelet[3053]: I0120 02:21:50.115733    3053 container_manager_linux.go:306] "Creating device plugin manager"
Jan 20 02:21:50.118388 kubelet[3053]: I0120 02:21:50.115769    3053 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 20 02:21:50.118388 kubelet[3053]: I0120 02:21:50.116690    3053 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 02:21:50.130993 kubelet[3053]: I0120 02:21:50.130944    3053 kubelet.go:475] "Attempting to sync node with API server"
Jan 20 02:21:50.131217 kubelet[3053]: I0120 02:21:50.131203    3053 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 02:21:50.131309 kubelet[3053]: I0120 02:21:50.131298    3053 kubelet.go:387] "Adding apiserver pod source"
Jan 20 02:21:50.131404 kubelet[3053]: I0120 02:21:50.131391    3053 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 02:21:50.158956 kubelet[3053]: I0120 02:21:50.145354    3053 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 20 02:21:50.158956 kubelet[3053]: I0120 02:21:50.153212    3053 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 20 02:21:50.158956 kubelet[3053]: I0120 02:21:50.153259    3053 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 20 02:21:50.222753 kubelet[3053]: I0120 02:21:50.221655    3053 server.go:1262] "Started kubelet"
Jan 20 02:21:50.223234 kubelet[3053]: I0120 02:21:50.223188    3053 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 02:21:50.238734 kubelet[3053]: I0120 02:21:50.235787    3053 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 20 02:21:50.288322 kubelet[3053]: I0120 02:21:50.239326    3053 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 02:21:50.288947 kubelet[3053]: I0120 02:21:50.258889    3053 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 02:21:50.289431 kubelet[3053]: I0120 02:21:50.228773    3053 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 02:21:50.335833 kubelet[3053]: I0120 02:21:50.259105    3053 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 20 02:21:50.336810 kubelet[3053]: I0120 02:21:50.336781    3053 server.go:310] "Adding debug handlers to kubelet server"
Jan 20 02:21:50.338142 kubelet[3053]: I0120 02:21:50.338117    3053 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 20 02:21:50.349780 kubelet[3053]: I0120 02:21:50.346970    3053 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 20 02:21:50.350692 kubelet[3053]: I0120 02:21:50.350514    3053 reconciler.go:29] "Reconciler: start to sync state"
Jan 20 02:21:50.373149 kubelet[3053]: E0120 02:21:50.372102    3053 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 20 02:21:50.404053 kubelet[3053]: I0120 02:21:50.403999    3053 factory.go:223] Registration of the systemd container factory successfully
Jan 20 02:21:50.404235 kubelet[3053]: I0120 02:21:50.404160    3053 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 20 02:21:50.440282 kubelet[3053]: I0120 02:21:50.439965    3053 factory.go:223] Registration of the containerd container factory successfully
Jan 20 02:21:50.496937 kubelet[3053]: I0120 02:21:50.495306    3053 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 20 02:21:50.512637 kubelet[3053]: I0120 02:21:50.512582    3053 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 20 02:21:50.512637 kubelet[3053]: I0120 02:21:50.512622    3053 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 20 02:21:50.512868 kubelet[3053]: I0120 02:21:50.512659    3053 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 20 02:21:50.512868 kubelet[3053]: E0120 02:21:50.512731    3053 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 02:21:50.612948 kubelet[3053]: E0120 02:21:50.612833    3053 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 20 02:21:50.797593 kubelet[3053]: I0120 02:21:50.796329    3053 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 20 02:21:50.797593 kubelet[3053]: I0120 02:21:50.796371    3053 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 20 02:21:50.797593 kubelet[3053]: I0120 02:21:50.796407    3053 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 02:21:50.797593 kubelet[3053]: I0120 02:21:50.796711    3053 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 20 02:21:50.797593 kubelet[3053]: I0120 02:21:50.796730    3053 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 20 02:21:50.797593 kubelet[3053]: I0120 02:21:50.796758    3053 policy_none.go:49] "None policy: Start"
Jan 20 02:21:50.797593 kubelet[3053]: I0120 02:21:50.796775    3053 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 20 02:21:50.797593 kubelet[3053]: I0120 02:21:50.796794    3053 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 20 02:21:50.797593 kubelet[3053]: I0120 02:21:50.796936    3053 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Jan 20 02:21:50.797593 kubelet[3053]: I0120 02:21:50.796949    3053 policy_none.go:47] "Start"
Jan 20 02:21:50.825299 kubelet[3053]: E0120 02:21:50.822241    3053 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 20 02:21:50.867196 kubelet[3053]: E0120 02:21:50.866313    3053 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 20 02:21:50.868942 kubelet[3053]: I0120 02:21:50.868112    3053 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 20 02:21:50.868942 kubelet[3053]: I0120 02:21:50.868140    3053 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 20 02:21:50.868942 kubelet[3053]: I0120 02:21:50.868473    3053 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 20 02:21:50.894219 kubelet[3053]: I0120 02:21:50.891644    3053 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 20 02:21:50.894219 kubelet[3053]: E0120 02:21:50.893613    3053 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 20 02:21:50.903824 containerd[1640]: time="2026-01-20T02:21:50.896043282Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 20 02:21:50.908505 kubelet[3053]: I0120 02:21:50.907866    3053 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 20 02:21:51.059361 kubelet[3053]: I0120 02:21:51.059079    3053 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 20 02:21:51.150427 kubelet[3053]: I0120 02:21:51.147018    3053 apiserver.go:52] "Watching apiserver"
Jan 20 02:21:51.212776 kubelet[3053]: I0120 02:21:51.205699    3053 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 20 02:21:51.212776 kubelet[3053]: I0120 02:21:51.205824    3053 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 20 02:21:51.226048 kubelet[3053]: I0120 02:21:51.226012    3053 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:21:51.256687 kubelet[3053]: I0120 02:21:51.253675    3053 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 20 02:21:51.296940 kubelet[3053]: I0120 02:21:51.293431    3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Jan 20 02:21:51.296940 kubelet[3053]: I0120 02:21:51.293514    3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt7nz\" (UniqueName: \"kubernetes.io/projected/a2190f5c-4922-47fe-aeab-15385e8d41aa-kube-api-access-xt7nz\") pod \"kube-proxy-hfqvg\" (UID: \"a2190f5c-4922-47fe-aeab-15385e8d41aa\") " pod="kube-system/kube-proxy-hfqvg"
Jan 20 02:21:51.296940 kubelet[3053]: I0120 02:21:51.293611    3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34752e74e0d884d240fd6441c2d8a1b2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"34752e74e0d884d240fd6441c2d8a1b2\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 02:21:51.296940 kubelet[3053]: I0120 02:21:51.293634    3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:21:51.296940 kubelet[3053]: I0120 02:21:51.293661    3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a2190f5c-4922-47fe-aeab-15385e8d41aa-kube-proxy\") pod \"kube-proxy-hfqvg\" (UID: \"a2190f5c-4922-47fe-aeab-15385e8d41aa\") " pod="kube-system/kube-proxy-hfqvg"
Jan 20 02:21:51.297246 kubelet[3053]: I0120 02:21:51.293714    3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2190f5c-4922-47fe-aeab-15385e8d41aa-xtables-lock\") pod \"kube-proxy-hfqvg\" (UID: \"a2190f5c-4922-47fe-aeab-15385e8d41aa\") " pod="kube-system/kube-proxy-hfqvg"
Jan 20 02:21:51.297246 kubelet[3053]: I0120 02:21:51.293738    3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2190f5c-4922-47fe-aeab-15385e8d41aa-lib-modules\") pod \"kube-proxy-hfqvg\" (UID: \"a2190f5c-4922-47fe-aeab-15385e8d41aa\") " pod="kube-system/kube-proxy-hfqvg"
Jan 20 02:21:51.297246 kubelet[3053]: I0120 02:21:51.293762    3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34752e74e0d884d240fd6441c2d8a1b2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"34752e74e0d884d240fd6441c2d8a1b2\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 02:21:51.297246 kubelet[3053]: I0120 02:21:51.293782    3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34752e74e0d884d240fd6441c2d8a1b2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"34752e74e0d884d240fd6441c2d8a1b2\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 02:21:51.297246 kubelet[3053]: I0120 02:21:51.293803    3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:21:51.297414 kubelet[3053]: I0120 02:21:51.293823    3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:21:51.297414 kubelet[3053]: I0120 02:21:51.293842    3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:21:51.297414 kubelet[3053]: I0120 02:21:51.293864    3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 20 02:21:51.330809 systemd[1]: Created slice kubepods-besteffort-poda2190f5c_4922_47fe_aeab_15385e8d41aa.slice - libcontainer container kubepods-besteffort-poda2190f5c_4922_47fe_aeab_15385e8d41aa.slice.
Jan 20 02:21:51.529830 kubelet[3053]: E0120 02:21:51.528889    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:51.541598 kubelet[3053]: E0120 02:21:51.536171    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:51.545130 systemd[1713]: Created slice background.slice - User Background Tasks Slice.
Jan 20 02:21:51.561418 systemd[1713]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
Jan 20 02:21:51.627142 kubelet[3053]: E0120 02:21:51.624168    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:51.654327 kubelet[3053]: E0120 02:21:51.654284    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:51.674098 kubelet[3053]: E0120 02:21:51.665230    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:51.674098 kubelet[3053]: E0120 02:21:51.667670    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:51.776710 kubelet[3053]: I0120 02:21:51.771052    3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.74745623 podStartE2EDuration="747.45623ms" podCreationTimestamp="2026-01-20 02:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:21:51.733873362 +0000 UTC m=+2.632515571" watchObservedRunningTime="2026-01-20 02:21:51.74745623 +0000 UTC m=+2.646098430"
Jan 20 02:21:51.776710 kubelet[3053]: E0120 02:21:51.772980    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:51.787563 containerd[1640]: time="2026-01-20T02:21:51.783139338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hfqvg,Uid:a2190f5c-4922-47fe-aeab-15385e8d41aa,Namespace:kube-system,Attempt:0,}"
Jan 20 02:21:51.844866 systemd[1713]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
Jan 20 02:21:52.235835 containerd[1640]: time="2026-01-20T02:21:52.227689512Z" level=info msg="connecting to shim 48e6996f311792dbae4460a025637930876b70abdb08b489137e8fe67cd587bb" address="unix:///run/containerd/s/8d6122af3108fa9f1cfd099f6d4aa774c25d82c5cfe33fbd1f97e8303689856a" namespace=k8s.io protocol=ttrpc version=3
Jan 20 02:21:55.305251 kubelet[3053]: E0120 02:21:55.300205    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:55.309102 kubelet[3053]: E0120 02:21:55.309073    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:55.328795 kubelet[3053]: E0120 02:21:55.328753    3053 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.566s"
Jan 20 02:21:55.416269 systemd[1]: Started cri-containerd-48e6996f311792dbae4460a025637930876b70abdb08b489137e8fe67cd587bb.scope - libcontainer container 48e6996f311792dbae4460a025637930876b70abdb08b489137e8fe67cd587bb.
Jan 20 02:21:56.006000 audit: BPF prog-id=131 op=LOAD
Jan 20 02:21:56.028932 kernel: kauditd_printk_skb: 32 callbacks suppressed
Jan 20 02:21:56.029103 kernel: audit: type=1334 audit(1768875716.006:444): prog-id=131 op=LOAD
Jan 20 02:21:56.114604 kernel: audit: type=1334 audit(1768875716.083:445): prog-id=132 op=LOAD
Jan 20 02:21:56.083000 audit: BPF prog-id=132 op=LOAD
Jan 20 02:21:56.083000 audit[3125]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3113 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:56.235952 kernel: audit: type=1300 audit(1768875716.083:445): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3113 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:56.236100 kernel: audit: type=1327 audit(1768875716.083:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653639393666333131373932646261653434363061303235363337
Jan 20 02:21:56.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653639393666333131373932646261653434363061303235363337
Jan 20 02:21:56.108000 audit: BPF prog-id=132 op=UNLOAD
Jan 20 02:21:56.272731 kernel: audit: type=1334 audit(1768875716.108:446): prog-id=132 op=UNLOAD
Jan 20 02:21:56.356055 kernel: audit: type=1300 audit(1768875716.108:446): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3113 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:56.108000 audit[3125]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3113 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:56.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653639393666333131373932646261653434363061303235363337
Jan 20 02:21:56.410272 kubelet[3053]: E0120 02:21:56.374224    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:56.410272 kubelet[3053]: E0120 02:21:56.408457    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:56.414722 kernel: audit: type=1327 audit(1768875716.108:446): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653639393666333131373932646261653434363061303235363337
Jan 20 02:21:56.114000 audit: BPF prog-id=133 op=LOAD
Jan 20 02:21:56.480638 kernel: audit: type=1334 audit(1768875716.114:447): prog-id=133 op=LOAD
Jan 20 02:21:56.480818 kernel: audit: type=1300 audit(1768875716.114:447): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3113 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:56.114000 audit[3125]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3113 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:56.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653639393666333131373932646261653434363061303235363337
Jan 20 02:21:56.114000 audit: BPF prog-id=134 op=LOAD
Jan 20 02:21:56.519578 kernel: audit: type=1327 audit(1768875716.114:447): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653639393666333131373932646261653434363061303235363337
Jan 20 02:21:56.114000 audit[3125]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3113 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:56.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653639393666333131373932646261653434363061303235363337
Jan 20 02:21:56.114000 audit: BPF prog-id=134 op=UNLOAD
Jan 20 02:21:56.114000 audit[3125]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3113 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:56.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653639393666333131373932646261653434363061303235363337
Jan 20 02:21:56.114000 audit: BPF prog-id=133 op=UNLOAD
Jan 20 02:21:56.114000 audit[3125]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3113 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:56.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653639393666333131373932646261653434363061303235363337
Jan 20 02:21:56.114000 audit: BPF prog-id=135 op=LOAD
Jan 20 02:21:56.114000 audit[3125]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3113 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:56.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653639393666333131373932646261653434363061303235363337
Jan 20 02:21:56.749729 containerd[1640]: time="2026-01-20T02:21:56.749678539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hfqvg,Uid:a2190f5c-4922-47fe-aeab-15385e8d41aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"48e6996f311792dbae4460a025637930876b70abdb08b489137e8fe67cd587bb\""
Jan 20 02:21:56.755322 kubelet[3053]: E0120 02:21:56.754932    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:56.853207 containerd[1640]: time="2026-01-20T02:21:56.853098226Z" level=info msg="CreateContainer within sandbox \"48e6996f311792dbae4460a025637930876b70abdb08b489137e8fe67cd587bb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 20 02:21:57.053443 containerd[1640]: time="2026-01-20T02:21:57.032872294Z" level=info msg="Container fc529632033220a9cf38c29ed8e5683e903670fd9b672ab2b2f2bc5e17c7536a: CDI devices from CRI Config.CDIDevices: []"
Jan 20 02:21:57.151662 containerd[1640]: time="2026-01-20T02:21:57.151384735Z" level=info msg="CreateContainer within sandbox \"48e6996f311792dbae4460a025637930876b70abdb08b489137e8fe67cd587bb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fc529632033220a9cf38c29ed8e5683e903670fd9b672ab2b2f2bc5e17c7536a\""
Jan 20 02:21:57.186595 containerd[1640]: time="2026-01-20T02:21:57.177990806Z" level=info msg="StartContainer for \"fc529632033220a9cf38c29ed8e5683e903670fd9b672ab2b2f2bc5e17c7536a\""
Jan 20 02:21:57.186595 containerd[1640]: time="2026-01-20T02:21:57.180588930Z" level=info msg="connecting to shim fc529632033220a9cf38c29ed8e5683e903670fd9b672ab2b2f2bc5e17c7536a" address="unix:///run/containerd/s/8d6122af3108fa9f1cfd099f6d4aa774c25d82c5cfe33fbd1f97e8303689856a" protocol=ttrpc version=3
Jan 20 02:21:57.536877 systemd[1]: Started cri-containerd-fc529632033220a9cf38c29ed8e5683e903670fd9b672ab2b2f2bc5e17c7536a.scope - libcontainer container fc529632033220a9cf38c29ed8e5683e903670fd9b672ab2b2f2bc5e17c7536a.
Jan 20 02:21:57.714040 kubelet[3053]: E0120 02:21:57.713934    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:57.729197 kubelet[3053]: E0120 02:21:57.722873    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:21:58.220000 audit: BPF prog-id=136 op=LOAD
Jan 20 02:21:58.220000 audit[3151]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3113 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:58.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663353239363332303333323230613963663338633239656438653536
Jan 20 02:21:58.223000 audit: BPF prog-id=137 op=LOAD
Jan 20 02:21:58.223000 audit[3151]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3113 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:58.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663353239363332303333323230613963663338633239656438653536
Jan 20 02:21:58.223000 audit: BPF prog-id=137 op=UNLOAD
Jan 20 02:21:58.223000 audit[3151]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3113 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:58.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663353239363332303333323230613963663338633239656438653536
Jan 20 02:21:58.223000 audit: BPF prog-id=136 op=UNLOAD
Jan 20 02:21:58.223000 audit[3151]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3113 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:58.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663353239363332303333323230613963663338633239656438653536
Jan 20 02:21:58.223000 audit: BPF prog-id=138 op=LOAD
Jan 20 02:21:58.223000 audit[3151]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3113 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:21:58.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663353239363332303333323230613963663338633239656438653536
Jan 20 02:21:58.462604 kubelet[3053]: E0120 02:21:58.418032    3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:58.759759 kubelet[3053]: E0120 02:21:58.757768 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:21:58.823425 containerd[1640]: time="2026-01-20T02:21:58.823209768Z" level=info msg="StartContainer for \"fc529632033220a9cf38c29ed8e5683e903670fd9b672ab2b2f2bc5e17c7536a\" returns successfully" Jan 20 02:21:59.555366 update_engine[1626]: I20260120 02:21:59.537428 1626 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 02:21:59.556443 update_engine[1626]: I20260120 02:21:59.556403 1626 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 02:21:59.557040 update_engine[1626]: I20260120 02:21:59.557011 1626 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 02:21:59.558045 update_engine[1626]: I20260120 02:21:59.558012 1626 omaha_request_params.cc:62] Current group set to alpha Jan 20 02:21:59.558321 update_engine[1626]: I20260120 02:21:59.558292 1626 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 02:21:59.558418 update_engine[1626]: I20260120 02:21:59.558395 1626 update_attempter.cc:643] Scheduling an action processor start. 
Jan 20 02:21:59.558601 update_engine[1626]: I20260120 02:21:59.558506 1626 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:21:59.558736 update_engine[1626]: I20260120 02:21:59.558709 1626 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 02:21:59.558947 update_engine[1626]: I20260120 02:21:59.558921 1626 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:21:59.559030 update_engine[1626]: I20260120 02:21:59.559008 1626 omaha_request_action.cc:272] Request: Jan 20 02:21:59.559030 update_engine[1626]: Jan 20 02:21:59.559030 update_engine[1626]: Jan 20 02:21:59.559030 update_engine[1626]: Jan 20 02:21:59.559030 update_engine[1626]: Jan 20 02:21:59.559030 update_engine[1626]: Jan 20 02:21:59.559030 update_engine[1626]: Jan 20 02:21:59.559030 update_engine[1626]: Jan 20 02:21:59.559030 update_engine[1626]: Jan 20 02:21:59.559330 update_engine[1626]: I20260120 02:21:59.559306 1626 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:21:59.606770 update_engine[1626]: I20260120 02:21:59.606705 1626 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:21:59.614771 update_engine[1626]: I20260120 02:21:59.614064 1626 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:21:59.617322 locksmithd[1691]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 02:21:59.657159 update_engine[1626]: E20260120 02:21:59.657053 1626 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:21:59.657720 update_engine[1626]: I20260120 02:21:59.657627 1626 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 02:21:59.832998 kubelet[3053]: E0120 02:21:59.832857 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:22:00.008818 kubelet[3053]: I0120 02:22:00.004754 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hfqvg" podStartSLOduration=10.004730478 podStartE2EDuration="10.004730478s" podCreationTimestamp="2026-01-20 02:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:22:00.004236226 +0000 UTC m=+10.902878445" watchObservedRunningTime="2026-01-20 02:22:00.004730478 +0000 UTC m=+10.903372687" Jan 20 02:22:00.228000 audit[3224]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:00.228000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeebc7eff0 a2=0 a3=7ffeebc7efdc items=0 ppid=3166 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:00.228000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 02:22:00.240000 audit[3225]: NETFILTER_CFG table=mangle:55 
family=10 entries=1 op=nft_register_chain pid=3225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:00.240000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff816b4460 a2=0 a3=7fff816b444c items=0 ppid=3166 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:00.251000 audit[3226]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:00.251000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeecfcd130 a2=0 a3=7ffeecfcd11c items=0 ppid=3166 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:00.251000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 02:22:00.240000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 02:22:00.280000 audit[3227]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:00.280000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5a8e3fd0 a2=0 a3=7fff5a8e3fbc items=0 ppid=3166 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:00.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 02:22:00.307000 audit[3228]: NETFILTER_CFG table=nat:58 family=10 entries=1 
op=nft_register_chain pid=3228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:00.307000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff85c0f220 a2=0 a3=7fff85c0f20c items=0 ppid=3166 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:00.307000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 02:22:00.318000 audit[3234]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:00.318000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd93a64b0 a2=0 a3=7ffdd93a649c items=0 ppid=3166 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:00.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 02:22:00.390000 audit[3235]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:00.390000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffb75e8e10 a2=0 a3=7fffb75e8dfc items=0 ppid=3166 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:00.390000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 02:22:00.430000 audit[3238]: NETFILTER_CFG table=filter:61 family=2 entries=1 
op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:00.430000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc1c383560 a2=0 a3=7ffc1c38354c items=0 ppid=3166 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:00.430000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 20 02:22:00.504000 audit[3246]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:00.504000 audit[3246]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd429bc280 a2=0 a3=7ffd429bc26c items=0 ppid=3166 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:00.504000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 20 02:22:00.531000 audit[3247]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:00.531000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe16926f20 a2=0 a3=7ffe16926f0c items=0 ppid=3166 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:00.531000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 02:22:00.808000 audit[3249]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:00.808000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdb1a7ce30 a2=0 a3=7ffdb1a7ce1c items=0 ppid=3166 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:00.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 02:22:00.848723 kubelet[3053]: E0120 02:22:00.835514 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:22:00.867000 audit[3250]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3250 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:00.867000 audit[3250]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe600a61d0 a2=0 a3=7ffe600a61bc items=0 ppid=3166 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:00.867000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 02:22:00.940869 systemd[1]: Created slice 
kubepods-besteffort-pod4b25ee5d_051a_4042_925d_73c4e50423f9.slice - libcontainer container kubepods-besteffort-pod4b25ee5d_051a_4042_925d_73c4e50423f9.slice. Jan 20 02:22:01.028859 kubelet[3053]: I0120 02:22:00.984183 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b25ee5d-051a-4042-925d-73c4e50423f9-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-8g9dk\" (UID: \"4b25ee5d-051a-4042-925d-73c4e50423f9\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-8g9dk" Jan 20 02:22:01.028859 kubelet[3053]: I0120 02:22:00.984753 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p82mc\" (UniqueName: \"kubernetes.io/projected/4b25ee5d-051a-4042-925d-73c4e50423f9-kube-api-access-p82mc\") pod \"tigera-operator-65cdcdfd6d-8g9dk\" (UID: \"4b25ee5d-051a-4042-925d-73c4e50423f9\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-8g9dk" Jan 20 02:22:01.282000 audit[3252]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:01.307074 kernel: kauditd_printk_skb: 63 callbacks suppressed Jan 20 02:22:01.307248 kernel: audit: type=1325 audit(1768875721.282:469): table=filter:66 family=2 entries=1 op=nft_register_rule pid=3252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:01.282000 audit[3252]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd11de7ad0 a2=0 a3=7ffd11de7abc items=0 ppid=3166 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:01.463861 kernel: audit: type=1300 audit(1768875721.282:469): arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd11de7ad0 a2=0 a3=7ffd11de7abc items=0 ppid=3166 pid=3252 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:01.282000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:22:01.564011 kernel: audit: type=1327 audit(1768875721.282:469): proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:22:01.722000 audit[3257]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3257 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:01.754770 containerd[1640]: time="2026-01-20T02:22:01.754715287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-8g9dk,Uid:4b25ee5d-051a-4042-925d-73c4e50423f9,Namespace:tigera-operator,Attempt:0,}" Jan 20 02:22:01.722000 audit[3257]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffefdc78eb0 a2=0 a3=7ffefdc78e9c items=0 ppid=3166 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:01.826490 kernel: audit: type=1325 audit(1768875721.722:470): table=filter:67 family=2 entries=1 op=nft_register_rule pid=3257 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:01.826675 kernel: audit: type=1300 audit(1768875721.722:470): arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffefdc78eb0 a2=0 a3=7ffefdc78e9c items=0 ppid=3166 pid=3257 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:01.722000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:22:01.914218 kernel: audit: type=1327 audit(1768875721.722:470): proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:22:01.914363 kernel: audit: type=1325 audit(1768875721.753:471): table=filter:68 family=2 entries=1 op=nft_register_chain pid=3258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:01.753000 audit[3258]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:01.753000 audit[3258]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc23323c0 a2=0 a3=7fffc23323ac items=0 ppid=3166 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.003780 kernel: audit: type=1300 audit(1768875721.753:471): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc23323c0 a2=0 a3=7fffc23323ac items=0 ppid=3166 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.003964 kernel: audit: type=1327 
audit(1768875721.753:471): proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 02:22:01.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 02:22:01.801000 audit[3260]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:01.801000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe57ec0860 a2=0 a3=7ffe57ec084c items=0 ppid=3166 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:01.801000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 02:22:01.809000 audit[3261]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3261 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:01.809000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3dfd4540 a2=0 a3=7ffc3dfd452c items=0 ppid=3166 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:01.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 02:22:01.819000 audit[3263]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3263 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:01.819000 audit[3263]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff6be26f70 a2=0 a3=7fff6be26f5c items=0 ppid=3166 
pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:01.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 20 02:22:01.878000 audit[3266]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:01.878000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc43c9d290 a2=0 a3=7ffc43c9d27c items=0 ppid=3166 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:01.878000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 20 02:22:01.937000 audit[3270]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:01.937000 audit[3270]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc20e49910 a2=0 a3=7ffc20e498fc items=0 ppid=3166 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:01.937000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 20 02:22:02.086194 kernel: audit: type=1325 audit(1768875721.801:472): table=filter:69 family=2 entries=1 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:01.951000 audit[3271]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3271 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:01.951000 audit[3271]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc43f34b00 a2=0 a3=7ffc43f34aec items=0 ppid=3166 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:01.951000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 02:22:01.994000 audit[3273]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3273 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:01.994000 audit[3273]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffefc535110 a2=0 a3=7ffefc5350fc items=0 ppid=3166 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:01.994000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:22:02.068000 audit[3276]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3276 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:02.068000 audit[3276]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff621f9550 a2=0 a3=7fff621f953c items=0 ppid=3166 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.068000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:22:02.074000 audit[3277]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3277 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:02.074000 audit[3277]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffea32e410 a2=0 a3=7fffea32e3fc items=0 ppid=3166 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.074000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 02:22:02.118000 audit[3279]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3279 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 02:22:02.118000 audit[3279]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd6cc4e2a0 a2=0 a3=7ffd6cc4e28c items=0 ppid=3166 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.118000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 02:22:02.227352 containerd[1640]: time="2026-01-20T02:22:02.227212161Z" level=info msg="connecting to shim 549fd0f2435fb263abd2e3fa0d85a4942fe97bc2d30bd31403454b283038b54c" address="unix:///run/containerd/s/643b54e12f3934942ba741d1d193676acd5739c5185d4e03cc1a546bf3211e55" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:22:02.411000 audit[3293]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:22:02.411000 audit[3293]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc60849b60 a2=0 a3=7ffc60849b4c items=0 ppid=3166 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:22:02.430460 systemd[1]: Started cri-containerd-549fd0f2435fb263abd2e3fa0d85a4942fe97bc2d30bd31403454b283038b54c.scope - libcontainer container 549fd0f2435fb263abd2e3fa0d85a4942fe97bc2d30bd31403454b283038b54c. 
Jan 20 02:22:02.499000 audit[3293]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:22:02.499000 audit[3293]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc60849b60 a2=0 a3=7ffc60849b4c items=0 ppid=3166 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.499000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:22:02.525000 audit: BPF prog-id=139 op=LOAD Jan 20 02:22:02.526000 audit: BPF prog-id=140 op=LOAD Jan 20 02:22:02.526000 audit[3305]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3292 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534396664306632343335666232363361626432653366613064383561 Jan 20 02:22:02.526000 audit: BPF prog-id=140 op=UNLOAD Jan 20 02:22:02.526000 audit[3305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.526000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534396664306632343335666232363361626432653366613064383561 Jan 20 02:22:02.526000 audit: BPF prog-id=141 op=LOAD Jan 20 02:22:02.526000 audit[3305]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3292 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534396664306632343335666232363361626432653366613064383561 Jan 20 02:22:02.526000 audit: BPF prog-id=142 op=LOAD Jan 20 02:22:02.526000 audit[3305]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3292 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534396664306632343335666232363361626432653366613064383561 Jan 20 02:22:02.526000 audit: BPF prog-id=142 op=UNLOAD Jan 20 02:22:02.526000 audit[3305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 02:22:02.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534396664306632343335666232363361626432653366613064383561 Jan 20 02:22:02.526000 audit: BPF prog-id=141 op=UNLOAD Jan 20 02:22:02.526000 audit[3305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534396664306632343335666232363361626432653366613064383561 Jan 20 02:22:02.526000 audit: BPF prog-id=143 op=LOAD Jan 20 02:22:02.526000 audit[3305]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3292 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534396664306632343335666232363361626432653366613064383561 Jan 20 02:22:02.556000 audit[3329]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3329 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.556000 audit[3329]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd694bbbf0 a2=0 a3=7ffd694bbbdc items=0 ppid=3166 
pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.556000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 02:22:02.607000 audit[3331]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3331 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.607000 audit[3331]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe059d2f90 a2=0 a3=7ffe059d2f7c items=0 ppid=3166 pid=3331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.607000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 20 02:22:02.630000 audit[3334]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3334 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.630000 audit[3334]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffcb63d1a0 a2=0 a3=7fffcb63d18c items=0 ppid=3166 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.630000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 20 02:22:02.645000 audit[3335]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3335 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.645000 audit[3335]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffda38bba0 a2=0 a3=7fffda38bb8c items=0 ppid=3166 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.645000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 02:22:02.668000 audit[3338]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3338 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.668000 audit[3338]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffede8e120 a2=0 a3=7fffede8e10c items=0 ppid=3166 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.668000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 02:22:02.680000 audit[3345]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3345 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.680000 audit[3345]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2b6f9c00 
a2=0 a3=7ffc2b6f9bec items=0 ppid=3166 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.680000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 02:22:02.697000 audit[3347]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3347 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.697000 audit[3347]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd6072bbd0 a2=0 a3=7ffd6072bbbc items=0 ppid=3166 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.697000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:22:02.712991 containerd[1640]: time="2026-01-20T02:22:02.707403869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-8g9dk,Uid:4b25ee5d-051a-4042-925d-73c4e50423f9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"549fd0f2435fb263abd2e3fa0d85a4942fe97bc2d30bd31403454b283038b54c\"" Jan 20 02:22:02.718278 containerd[1640]: time="2026-01-20T02:22:02.717413271Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 02:22:02.729000 audit[3350]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3350 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.729000 audit[3350]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff7152cc00 a2=0 a3=7fff7152cbec items=0 
ppid=3166 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.729000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:22:02.739000 audit[3351]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3351 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.739000 audit[3351]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd78f75470 a2=0 a3=7ffd78f7545c items=0 ppid=3166 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.739000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 02:22:02.760000 audit[3353]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3353 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.760000 audit[3353]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe12514dc0 a2=0 a3=7ffe12514dac items=0 ppid=3166 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.760000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 02:22:02.765000 audit[3354]: NETFILTER_CFG 
table=filter:91 family=10 entries=1 op=nft_register_chain pid=3354 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.765000 audit[3354]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6b248030 a2=0 a3=7fff6b24801c items=0 ppid=3166 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.765000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 02:22:02.781000 audit[3356]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3356 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.781000 audit[3356]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffebca81090 a2=0 a3=7ffebca8107c items=0 ppid=3166 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.781000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 20 02:22:02.806000 audit[3359]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3359 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.806000 audit[3359]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee3d3bad0 a2=0 a3=7ffee3d3babc items=0 ppid=3166 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.806000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 20 02:22:02.827000 audit[3362]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.827000 audit[3362]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd77ef9010 a2=0 a3=7ffd77ef8ffc items=0 ppid=3166 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.827000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 20 02:22:02.830000 audit[3363]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3363 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.830000 audit[3363]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcafb428b0 a2=0 a3=7ffcafb4289c items=0 ppid=3166 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.830000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 02:22:02.852000 audit[3365]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3365 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.852000 audit[3365]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7ffef3cfc1d0 a2=0 a3=7ffef3cfc1bc items=0 ppid=3166 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.852000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:22:02.871000 audit[3368]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3368 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.871000 audit[3368]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffda0f943b0 a2=0 a3=7ffda0f9439c items=0 ppid=3166 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.871000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 02:22:02.880000 audit[3369]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.880000 audit[3369]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4eb3aa60 a2=0 a3=7ffe4eb3aa4c items=0 ppid=3166 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.880000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 02:22:02.894000 audit[3371]: NETFILTER_CFG 
table=nat:99 family=10 entries=2 op=nft_register_chain pid=3371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.894000 audit[3371]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc86b3f460 a2=0 a3=7ffc86b3f44c items=0 ppid=3166 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.894000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 02:22:02.898000 audit[3372]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.898000 audit[3372]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8af95b10 a2=0 a3=7ffd8af95afc items=0 ppid=3166 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.898000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 02:22:02.910000 audit[3374]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.910000 audit[3374]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc843468c0 a2=0 a3=7ffc843468ac items=0 ppid=3166 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.910000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 02:22:02.931000 audit[3377]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 02:22:02.931000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe309d9b80 a2=0 a3=7ffe309d9b6c items=0 ppid=3166 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:02.931000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 02:22:03.006000 audit[3379]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 02:22:03.006000 audit[3379]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffedf160920 a2=0 a3=7ffedf16090c items=0 ppid=3166 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:03.006000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:22:03.011000 audit[3379]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 02:22:03.011000 audit[3379]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffedf160920 a2=0 a3=7ffedf16090c items=0 ppid=3166 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
20 02:22:03.011000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:22:05.466866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1420136150.mount: Deactivated successfully. Jan 20 02:22:09.535680 update_engine[1626]: I20260120 02:22:09.534704 1626 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:22:09.535680 update_engine[1626]: I20260120 02:22:09.534876 1626 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:22:09.557834 update_engine[1626]: I20260120 02:22:09.552428 1626 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:22:09.579997 update_engine[1626]: E20260120 02:22:09.579818 1626 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:22:09.579997 update_engine[1626]: I20260120 02:22:09.579952 1626 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 02:22:18.250911 containerd[1640]: time="2026-01-20T02:22:18.249968309Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:22:18.266476 containerd[1640]: time="2026-01-20T02:22:18.265734109Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23560844" Jan 20 02:22:18.279102 containerd[1640]: time="2026-01-20T02:22:18.279004014Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:22:18.291458 containerd[1640]: time="2026-01-20T02:22:18.291349438Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:22:18.299637 containerd[1640]: 
time="2026-01-20T02:22:18.297143407Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 15.579681024s" Jan 20 02:22:18.299637 containerd[1640]: time="2026-01-20T02:22:18.297219258Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 02:22:18.340294 containerd[1640]: time="2026-01-20T02:22:18.339907530Z" level=info msg="CreateContainer within sandbox \"549fd0f2435fb263abd2e3fa0d85a4942fe97bc2d30bd31403454b283038b54c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 02:22:18.513386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount500841946.mount: Deactivated successfully. 
Jan 20 02:22:18.549081 containerd[1640]: time="2026-01-20T02:22:18.545501230Z" level=info msg="Container 3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:22:18.589757 containerd[1640]: time="2026-01-20T02:22:18.589708603Z" level=info msg="CreateContainer within sandbox \"549fd0f2435fb263abd2e3fa0d85a4942fe97bc2d30bd31403454b283038b54c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08\"" Jan 20 02:22:18.592757 containerd[1640]: time="2026-01-20T02:22:18.592634207Z" level=info msg="StartContainer for \"3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08\"" Jan 20 02:22:18.606373 containerd[1640]: time="2026-01-20T02:22:18.606314015Z" level=info msg="connecting to shim 3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08" address="unix:///run/containerd/s/643b54e12f3934942ba741d1d193676acd5739c5185d4e03cc1a546bf3211e55" protocol=ttrpc version=3 Jan 20 02:22:18.851225 systemd[1]: Started cri-containerd-3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08.scope - libcontainer container 3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08. 
Jan 20 02:22:18.975315 kernel: kauditd_printk_skb: 129 callbacks suppressed Jan 20 02:22:18.975603 kernel: audit: type=1334 audit(1768875738.967:516): prog-id=144 op=LOAD Jan 20 02:22:18.967000 audit: BPF prog-id=144 op=LOAD Jan 20 02:22:18.980000 audit: BPF prog-id=145 op=LOAD Jan 20 02:22:18.980000 audit[3390]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3292 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:19.016501 kernel: audit: type=1334 audit(1768875738.980:517): prog-id=145 op=LOAD Jan 20 02:22:19.016715 kernel: audit: type=1300 audit(1768875738.980:517): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3292 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:18.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646633366138386263633363343133366338383133306136303434 Jan 20 02:22:19.065820 kernel: audit: type=1327 audit(1768875738.980:517): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646633366138386263633363343133366338383133306136303434 Jan 20 02:22:19.067122 kernel: audit: type=1334 audit(1768875738.980:518): prog-id=145 op=UNLOAD Jan 20 02:22:18.980000 audit: BPF prog-id=145 op=UNLOAD Jan 20 02:22:18.980000 audit[3390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3390 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:18.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646633366138386263633363343133366338383133306136303434 Jan 20 02:22:19.143200 kernel: audit: type=1300 audit(1768875738.980:518): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:19.143362 kernel: audit: type=1327 audit(1768875738.980:518): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646633366138386263633363343133366338383133306136303434 Jan 20 02:22:19.143397 kernel: audit: type=1334 audit(1768875738.980:519): prog-id=146 op=LOAD Jan 20 02:22:18.980000 audit: BPF prog-id=146 op=LOAD Jan 20 02:22:19.158721 kernel: audit: type=1300 audit(1768875738.980:519): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3292 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:18.980000 audit[3390]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3292 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:22:19.262984 kernel: audit: type=1327 audit(1768875738.980:519): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646633366138386263633363343133366338383133306136303434 Jan 20 02:22:18.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646633366138386263633363343133366338383133306136303434 Jan 20 02:22:18.981000 audit: BPF prog-id=147 op=LOAD Jan 20 02:22:18.981000 audit[3390]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3292 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:18.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646633366138386263633363343133366338383133306136303434 Jan 20 02:22:18.981000 audit: BPF prog-id=147 op=UNLOAD Jan 20 02:22:18.981000 audit[3390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:18.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646633366138386263633363343133366338383133306136303434 
Jan 20 02:22:18.981000 audit: BPF prog-id=146 op=UNLOAD Jan 20 02:22:18.981000 audit[3390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:18.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646633366138386263633363343133366338383133306136303434 Jan 20 02:22:18.981000 audit: BPF prog-id=148 op=LOAD Jan 20 02:22:18.981000 audit[3390]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3292 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:18.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361646633366138386263633363343133366338383133306136303434 Jan 20 02:22:19.352076 containerd[1640]: time="2026-01-20T02:22:19.350221725Z" level=info msg="StartContainer for \"3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08\" returns successfully" Jan 20 02:22:19.536436 update_engine[1626]: I20260120 02:22:19.531674 1626 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:22:19.536436 update_engine[1626]: I20260120 02:22:19.532112 1626 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:22:19.536436 update_engine[1626]: I20260120 02:22:19.533829 1626 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:22:19.595782 update_engine[1626]: E20260120 02:22:19.593910 1626 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:22:19.604311 update_engine[1626]: I20260120 02:22:19.604238 1626 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 02:22:20.031721 kubelet[3053]: I0120 02:22:20.030456 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-8g9dk" podStartSLOduration=4.429515903 podStartE2EDuration="20.030432249s" podCreationTimestamp="2026-01-20 02:22:00 +0000 UTC" firstStartedPulling="2026-01-20 02:22:02.709239369 +0000 UTC m=+13.607881569" lastFinishedPulling="2026-01-20 02:22:18.310155715 +0000 UTC m=+29.208797915" observedRunningTime="2026-01-20 02:22:20.022839251 +0000 UTC m=+30.921481510" watchObservedRunningTime="2026-01-20 02:22:20.030432249 +0000 UTC m=+30.929074447" Jan 20 02:22:29.549902 update_engine[1626]: I20260120 02:22:29.549798 1626 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:22:29.561033 update_engine[1626]: I20260120 02:22:29.550945 1626 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:22:29.561033 update_engine[1626]: I20260120 02:22:29.551561 1626 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:22:29.583617 update_engine[1626]: E20260120 02:22:29.572489 1626 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:22:29.583617 update_engine[1626]: I20260120 02:22:29.572714 1626 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:22:29.583617 update_engine[1626]: I20260120 02:22:29.572734 1626 omaha_request_action.cc:617] Omaha request response: Jan 20 02:22:29.583617 update_engine[1626]: E20260120 02:22:29.573050 1626 omaha_request_action.cc:636] Omaha request network transfer failed. 
Jan 20 02:22:29.583617 update_engine[1626]: I20260120 02:22:29.573158 1626 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 20 02:22:29.583617 update_engine[1626]: I20260120 02:22:29.573168 1626 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:22:29.583617 update_engine[1626]: I20260120 02:22:29.573179 1626 update_attempter.cc:306] Processing Done. Jan 20 02:22:29.583617 update_engine[1626]: E20260120 02:22:29.573200 1626 update_attempter.cc:619] Update failed. Jan 20 02:22:29.583617 update_engine[1626]: I20260120 02:22:29.573211 1626 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 02:22:29.583617 update_engine[1626]: I20260120 02:22:29.573221 1626 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 02:22:29.583617 update_engine[1626]: I20260120 02:22:29.573232 1626 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 20 02:22:29.588296 update_engine[1626]: I20260120 02:22:29.586776 1626 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:22:29.588296 update_engine[1626]: I20260120 02:22:29.586859 1626 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:22:29.588296 update_engine[1626]: I20260120 02:22:29.586874 1626 omaha_request_action.cc:272] Request: Jan 20 02:22:29.588296 update_engine[1626]: Jan 20 02:22:29.588296 update_engine[1626]: Jan 20 02:22:29.588296 update_engine[1626]: Jan 20 02:22:29.588296 update_engine[1626]: Jan 20 02:22:29.588296 update_engine[1626]: Jan 20 02:22:29.588296 update_engine[1626]: Jan 20 02:22:29.588296 update_engine[1626]: I20260120 02:22:29.586885 1626 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:22:29.588296 update_engine[1626]: I20260120 02:22:29.586930 1626 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:22:29.588296 update_engine[1626]: I20260120 02:22:29.587455 1626 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:22:29.600810 locksmithd[1691]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 02:22:29.612047 update_engine[1626]: E20260120 02:22:29.611974 1626 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:22:29.612634 update_engine[1626]: I20260120 02:22:29.612327 1626 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:22:29.612634 update_engine[1626]: I20260120 02:22:29.612362 1626 omaha_request_action.cc:617] Omaha request response: Jan 20 02:22:29.612634 update_engine[1626]: I20260120 02:22:29.612377 1626 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:22:29.612634 update_engine[1626]: I20260120 02:22:29.612387 1626 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:22:29.612634 update_engine[1626]: I20260120 02:22:29.612396 1626 update_attempter.cc:306] Processing Done. Jan 20 02:22:29.612634 update_engine[1626]: I20260120 02:22:29.612406 1626 update_attempter.cc:310] Error event sent. Jan 20 02:22:29.612634 update_engine[1626]: I20260120 02:22:29.612422 1626 update_check_scheduler.cc:74] Next update check in 47m13s Jan 20 02:22:29.615724 locksmithd[1691]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 02:22:34.767953 systemd[1]: cri-containerd-3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08.scope: Deactivated successfully. 
Jan 20 02:22:34.835112 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 02:22:34.835570 kernel: audit: type=1334 audit(1768875754.782:524): prog-id=144 op=UNLOAD Jan 20 02:22:34.848838 kernel: audit: type=1334 audit(1768875754.782:525): prog-id=148 op=UNLOAD Jan 20 02:22:34.782000 audit: BPF prog-id=144 op=UNLOAD Jan 20 02:22:34.782000 audit: BPF prog-id=148 op=UNLOAD Jan 20 02:22:34.781392 systemd[1]: cri-containerd-3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08.scope: Consumed 1.265s CPU time, 37.7M memory peak. Jan 20 02:22:34.853869 containerd[1640]: time="2026-01-20T02:22:34.850365868Z" level=info msg="received container exit event container_id:\"3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08\" id:\"3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08\" pid:3403 exit_status:1 exited_at:{seconds:1768875754 nanos:768464208}" Jan 20 02:22:35.709302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08-rootfs.mount: Deactivated successfully. Jan 20 02:22:37.434467 kubelet[3053]: I0120 02:22:37.425187 3053 scope.go:117] "RemoveContainer" containerID="3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08" Jan 20 02:22:37.792141 containerd[1640]: time="2026-01-20T02:22:37.778869139Z" level=info msg="CreateContainer within sandbox \"549fd0f2435fb263abd2e3fa0d85a4942fe97bc2d30bd31403454b283038b54c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 20 02:22:38.198477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3285387038.mount: Deactivated successfully. 
Jan 20 02:22:38.277985 containerd[1640]: time="2026-01-20T02:22:38.277906198Z" level=info msg="Container b7db0b09fbfeac738b12c9e05aa963119bafa43da4041ccb79e8c79af99479f9: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:22:38.369049 containerd[1640]: time="2026-01-20T02:22:38.367974410Z" level=info msg="CreateContainer within sandbox \"549fd0f2435fb263abd2e3fa0d85a4942fe97bc2d30bd31403454b283038b54c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b7db0b09fbfeac738b12c9e05aa963119bafa43da4041ccb79e8c79af99479f9\"" Jan 20 02:22:38.382016 containerd[1640]: time="2026-01-20T02:22:38.381949189Z" level=info msg="StartContainer for \"b7db0b09fbfeac738b12c9e05aa963119bafa43da4041ccb79e8c79af99479f9\"" Jan 20 02:22:38.427986 containerd[1640]: time="2026-01-20T02:22:38.424226363Z" level=info msg="connecting to shim b7db0b09fbfeac738b12c9e05aa963119bafa43da4041ccb79e8c79af99479f9" address="unix:///run/containerd/s/643b54e12f3934942ba741d1d193676acd5739c5185d4e03cc1a546bf3211e55" protocol=ttrpc version=3 Jan 20 02:22:38.684957 systemd[1]: Started cri-containerd-b7db0b09fbfeac738b12c9e05aa963119bafa43da4041ccb79e8c79af99479f9.scope - libcontainer container b7db0b09fbfeac738b12c9e05aa963119bafa43da4041ccb79e8c79af99479f9. 
Jan 20 02:22:39.069000 audit: BPF prog-id=149 op=LOAD Jan 20 02:22:39.086718 kernel: audit: type=1334 audit(1768875759.069:526): prog-id=149 op=LOAD Jan 20 02:22:39.089000 audit: BPF prog-id=150 op=LOAD Jan 20 02:22:39.089000 audit[3453]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3292 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:39.141864 kernel: audit: type=1334 audit(1768875759.089:527): prog-id=150 op=LOAD Jan 20 02:22:39.142054 kernel: audit: type=1300 audit(1768875759.089:527): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3292 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:39.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646230623039666266656163373338623132633965303561613936 Jan 20 02:22:39.093000 audit: BPF prog-id=150 op=UNLOAD Jan 20 02:22:39.184633 kernel: audit: type=1327 audit(1768875759.089:527): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646230623039666266656163373338623132633965303561613936 Jan 20 02:22:39.184730 kernel: audit: type=1334 audit(1768875759.093:528): prog-id=150 op=UNLOAD Jan 20 02:22:39.184797 kernel: audit: type=1300 audit(1768875759.093:528): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:39.093000 audit[3453]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:39.202729 kernel: audit: type=1327 audit(1768875759.093:528): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646230623039666266656163373338623132633965303561613936 Jan 20 02:22:39.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646230623039666266656163373338623132633965303561613936 Jan 20 02:22:39.250692 kernel: audit: type=1334 audit(1768875759.093:529): prog-id=151 op=LOAD Jan 20 02:22:39.093000 audit: BPF prog-id=151 op=LOAD Jan 20 02:22:39.093000 audit[3453]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3292 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:39.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646230623039666266656163373338623132633965303561613936 Jan 20 02:22:39.093000 audit: BPF prog-id=152 op=LOAD Jan 20 02:22:39.093000 audit[3453]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3292 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:39.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646230623039666266656163373338623132633965303561613936 Jan 20 02:22:39.094000 audit: BPF prog-id=152 op=UNLOAD Jan 20 02:22:39.094000 audit[3453]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:39.094000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646230623039666266656163373338623132633965303561613936 Jan 20 02:22:39.094000 audit: BPF prog-id=151 op=UNLOAD Jan 20 02:22:39.094000 audit[3453]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3292 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:39.094000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646230623039666266656163373338623132633965303561613936 Jan 20 02:22:39.094000 audit: BPF prog-id=153 op=LOAD Jan 20 02:22:39.094000 audit[3453]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3292 pid=3453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:22:39.094000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646230623039666266656163373338623132633965303561613936 Jan 20 02:22:39.520687 containerd[1640]: time="2026-01-20T02:22:39.520329591Z" level=info msg="StartContainer for \"b7db0b09fbfeac738b12c9e05aa963119bafa43da4041ccb79e8c79af99479f9\" returns successfully" Jan 20 02:22:46.661000 audit[1890]: USER_END pid=1890 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 02:22:46.663875 sudo[1890]: pam_unix(sudo:session): session closed for user root Jan 20 02:22:46.670691 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 20 02:22:46.670810 kernel: audit: type=1106 audit(1768875766.661:534): pid=1890 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 02:22:46.711849 kernel: audit: type=1104 audit(1768875766.661:535): pid=1890 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 02:22:46.661000 audit[1890]: CRED_DISP pid=1890 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 02:22:46.711200 sshd-session[1885]: pam_unix(sshd:session): session closed for user core Jan 20 02:22:46.712666 sshd[1889]: Connection closed by 10.0.0.1 port 41850 Jan 20 02:22:46.734900 systemd-logind[1619]: Session 10 logged out. Waiting for processes to exit. Jan 20 02:22:46.724000 audit[1885]: USER_END pid=1885 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:22:46.746463 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:41850.service: Deactivated successfully. Jan 20 02:22:46.767510 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 02:22:46.768238 systemd[1]: session-10.scope: Consumed 18.797s CPU time, 219M memory peak. Jan 20 02:22:46.805900 systemd-logind[1619]: Removed session 10. Jan 20 02:22:46.823454 kernel: audit: type=1106 audit(1768875766.724:536): pid=1885 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:22:46.823709 kernel: audit: type=1104 audit(1768875766.724:537): pid=1885 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:22:46.724000 audit[1885]: CRED_DISP pid=1885 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:22:46.741000 audit[1]: 
SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.97:22-10.0.0.1:41850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:22:46.901175 kernel: audit: type=1131 audit(1768875766.741:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.97:22-10.0.0.1:41850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:23:01.089000 audit[3536]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3536 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:01.121685 kernel: audit: type=1325 audit(1768875781.089:539): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3536 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:01.121836 kernel: audit: type=1300 audit(1768875781.089:539): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffefea9b340 a2=0 a3=7ffefea9b32c items=0 ppid=3166 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:01.089000 audit[3536]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffefea9b340 a2=0 a3=7ffefea9b32c items=0 ppid=3166 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:01.089000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:01.201000 audit[3536]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3536 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:01.309500 kernel: audit: type=1327 
audit(1768875781.089:539): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:01.309736 kernel: audit: type=1325 audit(1768875781.201:540): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3536 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:01.309776 kernel: audit: type=1300 audit(1768875781.201:540): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffefea9b340 a2=0 a3=0 items=0 ppid=3166 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:01.201000 audit[3536]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffefea9b340 a2=0 a3=0 items=0 ppid=3166 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:01.368605 kernel: audit: type=1327 audit(1768875781.201:540): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:01.201000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:02.426000 audit[3539]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3539 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:02.426000 audit[3539]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffece11ad90 a2=0 a3=7ffece11ad7c items=0 ppid=3166 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:02.505657 kernel: audit: type=1325 audit(1768875782.426:541): table=filter:107 family=2 
entries=16 op=nft_register_rule pid=3539 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:02.505806 kernel: audit: type=1300 audit(1768875782.426:541): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffece11ad90 a2=0 a3=7ffece11ad7c items=0 ppid=3166 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:02.426000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:02.580089 kernel: audit: type=1327 audit(1768875782.426:541): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:02.580000 audit[3539]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3539 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:02.580000 audit[3539]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffece11ad90 a2=0 a3=0 items=0 ppid=3166 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:02.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:02.615008 kernel: audit: type=1325 audit(1768875782.580:542): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3539 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:13.524879 kubelet[3053]: E0120 02:23:13.524758 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:23:22.019498 kubelet[3053]: E0120 02:23:22.011147 3053 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:23:22.093617 kubelet[3053]: E0120 02:23:22.086283 3053 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.204s" Jan 20 02:23:25.519930 kubelet[3053]: E0120 02:23:25.519778 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:23:25.784000 audit[3541]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:25.802167 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 20 02:23:25.802368 kernel: audit: type=1325 audit(1768875805.784:543): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:25.886228 kernel: audit: type=1300 audit(1768875805.784:543): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd50ba4ce0 a2=0 a3=7ffd50ba4ccc items=0 ppid=3166 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:25.784000 audit[3541]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd50ba4ce0 a2=0 a3=7ffd50ba4ccc items=0 ppid=3166 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:25.784000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:25.922642 kernel: audit: type=1327 audit(1768875805.784:543): 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:25.924000 audit[3541]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:25.982172 kernel: audit: type=1325 audit(1768875805.924:544): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:25.924000 audit[3541]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd50ba4ce0 a2=0 a3=0 items=0 ppid=3166 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:25.924000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:26.082128 kernel: audit: type=1300 audit(1768875805.924:544): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd50ba4ce0 a2=0 a3=0 items=0 ppid=3166 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:26.082303 kernel: audit: type=1327 audit(1768875805.924:544): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:26.516302 kubelet[3053]: E0120 02:23:26.515805 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:23:27.221000 audit[3544]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:27.279879 kernel: audit: type=1325 audit(1768875807.221:545): 
table=filter:111 family=2 entries=19 op=nft_register_rule pid=3544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:27.288911 kernel: audit: type=1300 audit(1768875807.221:545): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff5f7a0d20 a2=0 a3=7fff5f7a0d0c items=0 ppid=3166 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:27.221000 audit[3544]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff5f7a0d20 a2=0 a3=7fff5f7a0d0c items=0 ppid=3166 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:27.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:27.355000 audit[3544]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:27.396631 kernel: audit: type=1327 audit(1768875807.221:545): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:27.397099 kernel: audit: type=1325 audit(1768875807.355:546): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:27.355000 audit[3544]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5f7a0d20 a2=0 a3=0 items=0 ppid=3166 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:27.355000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:38.685000 audit[3548]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:38.759404 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 20 02:23:38.759582 kernel: audit: type=1325 audit(1768875818.685:547): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:38.813058 kernel: audit: type=1300 audit(1768875818.685:547): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffec420dea0 a2=0 a3=7ffec420de8c items=0 ppid=3166 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:38.685000 audit[3548]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffec420dea0 a2=0 a3=7ffec420de8c items=0 ppid=3166 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:38.685000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:38.887600 kernel: audit: type=1327 audit(1768875818.685:547): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:38.887754 kernel: audit: type=1325 audit(1768875818.808:548): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:38.808000 audit[3548]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:38.808000 
audit[3548]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffec420dea0 a2=0 a3=0 items=0 ppid=3166 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:38.965483 systemd[1]: Created slice kubepods-besteffort-pod6df09d31_97d8_4ba8_82f8_8d09a8ea0aa0.slice - libcontainer container kubepods-besteffort-pod6df09d31_97d8_4ba8_82f8_8d09a8ea0aa0.slice. Jan 20 02:23:38.980612 kernel: audit: type=1300 audit(1768875818.808:548): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffec420dea0 a2=0 a3=0 items=0 ppid=3166 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:38.980707 kernel: audit: type=1327 audit(1768875818.808:548): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:38.808000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:38.991463 kubelet[3053]: I0120 02:23:38.990292 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6df09d31-97d8-4ba8-82f8-8d09a8ea0aa0-tigera-ca-bundle\") pod \"calico-typha-f476db845-mfdzm\" (UID: \"6df09d31-97d8-4ba8-82f8-8d09a8ea0aa0\") " pod="calico-system/calico-typha-f476db845-mfdzm" Jan 20 02:23:38.991463 kubelet[3053]: I0120 02:23:38.990400 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6df09d31-97d8-4ba8-82f8-8d09a8ea0aa0-typha-certs\") pod \"calico-typha-f476db845-mfdzm\" (UID: \"6df09d31-97d8-4ba8-82f8-8d09a8ea0aa0\") " 
pod="calico-system/calico-typha-f476db845-mfdzm" Jan 20 02:23:38.991463 kubelet[3053]: I0120 02:23:38.990435 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcttw\" (UniqueName: \"kubernetes.io/projected/6df09d31-97d8-4ba8-82f8-8d09a8ea0aa0-kube-api-access-xcttw\") pod \"calico-typha-f476db845-mfdzm\" (UID: \"6df09d31-97d8-4ba8-82f8-8d09a8ea0aa0\") " pod="calico-system/calico-typha-f476db845-mfdzm" Jan 20 02:23:39.334606 kubelet[3053]: E0120 02:23:39.331032 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:23:39.338778 containerd[1640]: time="2026-01-20T02:23:39.336234615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f476db845-mfdzm,Uid:6df09d31-97d8-4ba8-82f8-8d09a8ea0aa0,Namespace:calico-system,Attempt:0,}" Jan 20 02:23:39.640679 kubelet[3053]: I0120 02:23:39.639488 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/31edae0f-8908-401d-8494-0222ecbc76f9-var-run-calico\") pod \"calico-node-cwnc4\" (UID: \"31edae0f-8908-401d-8494-0222ecbc76f9\") " pod="calico-system/calico-node-cwnc4" Jan 20 02:23:39.640679 kubelet[3053]: I0120 02:23:39.639610 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31edae0f-8908-401d-8494-0222ecbc76f9-tigera-ca-bundle\") pod \"calico-node-cwnc4\" (UID: \"31edae0f-8908-401d-8494-0222ecbc76f9\") " pod="calico-system/calico-node-cwnc4" Jan 20 02:23:39.640679 kubelet[3053]: I0120 02:23:39.639651 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/31edae0f-8908-401d-8494-0222ecbc76f9-cni-log-dir\") 
pod \"calico-node-cwnc4\" (UID: \"31edae0f-8908-401d-8494-0222ecbc76f9\") " pod="calico-system/calico-node-cwnc4" Jan 20 02:23:39.640679 kubelet[3053]: I0120 02:23:39.639678 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/31edae0f-8908-401d-8494-0222ecbc76f9-cni-bin-dir\") pod \"calico-node-cwnc4\" (UID: \"31edae0f-8908-401d-8494-0222ecbc76f9\") " pod="calico-system/calico-node-cwnc4" Jan 20 02:23:39.640679 kubelet[3053]: I0120 02:23:39.639708 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/31edae0f-8908-401d-8494-0222ecbc76f9-node-certs\") pod \"calico-node-cwnc4\" (UID: \"31edae0f-8908-401d-8494-0222ecbc76f9\") " pod="calico-system/calico-node-cwnc4" Jan 20 02:23:39.640988 kubelet[3053]: I0120 02:23:39.639729 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/31edae0f-8908-401d-8494-0222ecbc76f9-policysync\") pod \"calico-node-cwnc4\" (UID: \"31edae0f-8908-401d-8494-0222ecbc76f9\") " pod="calico-system/calico-node-cwnc4" Jan 20 02:23:39.640988 kubelet[3053]: I0120 02:23:39.639754 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31edae0f-8908-401d-8494-0222ecbc76f9-var-lib-calico\") pod \"calico-node-cwnc4\" (UID: \"31edae0f-8908-401d-8494-0222ecbc76f9\") " pod="calico-system/calico-node-cwnc4" Jan 20 02:23:39.640988 kubelet[3053]: I0120 02:23:39.639779 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/31edae0f-8908-401d-8494-0222ecbc76f9-flexvol-driver-host\") pod \"calico-node-cwnc4\" (UID: \"31edae0f-8908-401d-8494-0222ecbc76f9\") " 
pod="calico-system/calico-node-cwnc4" Jan 20 02:23:39.640988 kubelet[3053]: I0120 02:23:39.639806 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31edae0f-8908-401d-8494-0222ecbc76f9-lib-modules\") pod \"calico-node-cwnc4\" (UID: \"31edae0f-8908-401d-8494-0222ecbc76f9\") " pod="calico-system/calico-node-cwnc4" Jan 20 02:23:39.640988 kubelet[3053]: I0120 02:23:39.639831 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/31edae0f-8908-401d-8494-0222ecbc76f9-cni-net-dir\") pod \"calico-node-cwnc4\" (UID: \"31edae0f-8908-401d-8494-0222ecbc76f9\") " pod="calico-system/calico-node-cwnc4" Jan 20 02:23:39.641170 kubelet[3053]: I0120 02:23:39.639854 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqchx\" (UniqueName: \"kubernetes.io/projected/31edae0f-8908-401d-8494-0222ecbc76f9-kube-api-access-lqchx\") pod \"calico-node-cwnc4\" (UID: \"31edae0f-8908-401d-8494-0222ecbc76f9\") " pod="calico-system/calico-node-cwnc4" Jan 20 02:23:39.641170 kubelet[3053]: I0120 02:23:39.639880 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31edae0f-8908-401d-8494-0222ecbc76f9-xtables-lock\") pod \"calico-node-cwnc4\" (UID: \"31edae0f-8908-401d-8494-0222ecbc76f9\") " pod="calico-system/calico-node-cwnc4" Jan 20 02:23:39.669452 containerd[1640]: time="2026-01-20T02:23:39.667703185Z" level=info msg="connecting to shim 321f3da913be746f14abc12b7a74b61c5908daffe0894641fb84ffc2a75fd113" address="unix:///run/containerd/s/ba5a8ba912b993db95d8f30a04f7f2106a3e44fd8eb524af0182c40943df35b6" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:23:39.681473 systemd[1]: Created slice 
kubepods-besteffort-pod31edae0f_8908_401d_8494_0222ecbc76f9.slice - libcontainer container kubepods-besteffort-pod31edae0f_8908_401d_8494_0222ecbc76f9.slice. Jan 20 02:23:39.803044 kubelet[3053]: E0120 02:23:39.802976 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:39.803044 kubelet[3053]: W0120 02:23:39.803010 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:39.803044 kubelet[3053]: E0120 02:23:39.803043 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:39.979025 kubelet[3053]: E0120 02:23:39.977894 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:39.979025 kubelet[3053]: W0120 02:23:39.977930 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:39.979025 kubelet[3053]: E0120 02:23:39.977958 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:39.981736 kubelet[3053]: E0120 02:23:39.980655 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:39.981736 kubelet[3053]: W0120 02:23:39.980672 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:39.981736 kubelet[3053]: E0120 02:23:39.980691 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.071000 audit[3602]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:40.095575 kubelet[3053]: E0120 02:23:40.094210 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:23:40.095575 kubelet[3053]: E0120 02:23:40.095122 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:23:40.110262 kernel: audit: type=1325 audit(1768875820.071:549): table=filter:115 family=2 entries=22 op=nft_register_rule pid=3602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:40.168871 kernel: audit: type=1300 audit(1768875820.071:549): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc2f9df970 a2=0 a3=7ffc2f9df95c items=0 ppid=3166 pid=3602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:40.071000 audit[3602]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc2f9df970 a2=0 a3=7ffc2f9df95c items=0 ppid=3166 pid=3602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:40.169289 containerd[1640]: time="2026-01-20T02:23:40.141910400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cwnc4,Uid:31edae0f-8908-401d-8494-0222ecbc76f9,Namespace:calico-system,Attempt:0,}" Jan 20 02:23:40.071000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:40.212788 kernel: audit: type=1327 audit(1768875820.071:549): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:40.212940 kernel: audit: type=1325 audit(1768875820.122:550): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:40.122000 audit[3602]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:23:40.213084 kubelet[3053]: E0120 02:23:40.203918 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.213084 kubelet[3053]: W0120 02:23:40.203949 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.213084 kubelet[3053]: E0120 02:23:40.203976 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.122000 audit[3602]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc2f9df970 a2=0 a3=0 items=0 ppid=3166 pid=3602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:40.122000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:23:40.227899 kubelet[3053]: E0120 02:23:40.226888 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.227899 kubelet[3053]: W0120 02:23:40.226933 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.227899 kubelet[3053]: E0120 02:23:40.226965 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.244598 kubelet[3053]: E0120 02:23:40.234868 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.244598 kubelet[3053]: W0120 02:23:40.234900 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.244598 kubelet[3053]: E0120 02:23:40.234926 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.244598 kubelet[3053]: E0120 02:23:40.241479 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.244598 kubelet[3053]: W0120 02:23:40.241505 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.244598 kubelet[3053]: E0120 02:23:40.241594 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.256062 kubelet[3053]: E0120 02:23:40.255683 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.256062 kubelet[3053]: W0120 02:23:40.255711 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.256062 kubelet[3053]: E0120 02:23:40.255741 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.256062 kubelet[3053]: I0120 02:23:40.255788 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/797382c1-6a9f-48bd-be88-5e85feeef509-kubelet-dir\") pod \"csi-node-driver-9lglv\" (UID: \"797382c1-6a9f-48bd-be88-5e85feeef509\") " pod="calico-system/csi-node-driver-9lglv" Jan 20 02:23:40.259807 kubelet[3053]: E0120 02:23:40.259746 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.259807 kubelet[3053]: W0120 02:23:40.259772 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.259807 kubelet[3053]: E0120 02:23:40.259796 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.271452 kubelet[3053]: E0120 02:23:40.270972 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.271452 kubelet[3053]: W0120 02:23:40.271083 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.271452 kubelet[3053]: E0120 02:23:40.271164 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.276144 kubelet[3053]: E0120 02:23:40.276050 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.281510 kubelet[3053]: W0120 02:23:40.276324 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.283585 kubelet[3053]: E0120 02:23:40.283455 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.294929 kubelet[3053]: E0120 02:23:40.294835 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.295144 kubelet[3053]: W0120 02:23:40.295118 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.295433 kubelet[3053]: E0120 02:23:40.295323 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.309438 kubelet[3053]: E0120 02:23:40.309340 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.310690 kubelet[3053]: W0120 02:23:40.309979 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.310690 kubelet[3053]: E0120 02:23:40.310083 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.311348 kubelet[3053]: E0120 02:23:40.311328 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.311649 kubelet[3053]: W0120 02:23:40.311460 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.311649 kubelet[3053]: E0120 02:23:40.311488 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.320210 kubelet[3053]: E0120 02:23:40.320089 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.323455 kubelet[3053]: W0120 02:23:40.320119 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.323455 kubelet[3053]: E0120 02:23:40.320353 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.325125 kubelet[3053]: E0120 02:23:40.325038 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.325125 kubelet[3053]: W0120 02:23:40.325056 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.325864 kubelet[3053]: E0120 02:23:40.325259 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.326978 kubelet[3053]: E0120 02:23:40.326747 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.326978 kubelet[3053]: W0120 02:23:40.326862 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.326978 kubelet[3053]: E0120 02:23:40.326886 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.345729 kubelet[3053]: E0120 02:23:40.345631 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.346186 kubelet[3053]: W0120 02:23:40.345958 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.346186 kubelet[3053]: E0120 02:23:40.346078 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.359243 kubelet[3053]: E0120 02:23:40.359146 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.359480 kubelet[3053]: W0120 02:23:40.359457 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.359684 kubelet[3053]: E0120 02:23:40.359664 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.374011 kubelet[3053]: E0120 02:23:40.373968 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.374204 kubelet[3053]: W0120 02:23:40.374177 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.374323 kubelet[3053]: E0120 02:23:40.374301 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.379239 kubelet[3053]: E0120 02:23:40.379215 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.379354 kubelet[3053]: W0120 02:23:40.379334 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.379500 kubelet[3053]: E0120 02:23:40.379479 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.380697 kubelet[3053]: E0120 02:23:40.380621 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.380837 kubelet[3053]: W0120 02:23:40.380815 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.381011 kubelet[3053]: E0120 02:23:40.380991 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.403063 kubelet[3053]: E0120 02:23:40.403028 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.403228 kubelet[3053]: W0120 02:23:40.403204 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.403332 kubelet[3053]: E0120 02:23:40.403312 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.429440 kubelet[3053]: E0120 02:23:40.421504 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.429440 kubelet[3053]: W0120 02:23:40.421591 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.429440 kubelet[3053]: E0120 02:23:40.421623 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.430928 containerd[1640]: time="2026-01-20T02:23:40.430727337Z" level=info msg="connecting to shim ae84de9c51e820944ea345533a1a31aa6ee2df1028674b301d17a8b74075f781" address="unix:///run/containerd/s/463fc380af1408ace911e3c035f7b7e73e24b72364a7b26d865255c7456e3318" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:23:40.438634 kubelet[3053]: E0120 02:23:40.437013 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.438634 kubelet[3053]: W0120 02:23:40.437068 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.438634 kubelet[3053]: E0120 02:23:40.437095 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.437945 systemd[1]: Started cri-containerd-321f3da913be746f14abc12b7a74b61c5908daffe0894641fb84ffc2a75fd113.scope - libcontainer container 321f3da913be746f14abc12b7a74b61c5908daffe0894641fb84ffc2a75fd113. Jan 20 02:23:40.450567 kubelet[3053]: E0120 02:23:40.449762 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.450567 kubelet[3053]: W0120 02:23:40.449789 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.450567 kubelet[3053]: E0120 02:23:40.449815 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.458256 kubelet[3053]: E0120 02:23:40.457500 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.458256 kubelet[3053]: W0120 02:23:40.457622 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.458256 kubelet[3053]: E0120 02:23:40.457703 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.458256 kubelet[3053]: I0120 02:23:40.457747 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/797382c1-6a9f-48bd-be88-5e85feeef509-varrun\") pod \"csi-node-driver-9lglv\" (UID: \"797382c1-6a9f-48bd-be88-5e85feeef509\") " pod="calico-system/csi-node-driver-9lglv" Jan 20 02:23:40.458945 kubelet[3053]: E0120 02:23:40.458859 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.459005 kubelet[3053]: W0120 02:23:40.458984 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.459052 kubelet[3053]: E0120 02:23:40.459009 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.459206 kubelet[3053]: I0120 02:23:40.459143 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/797382c1-6a9f-48bd-be88-5e85feeef509-registration-dir\") pod \"csi-node-driver-9lglv\" (UID: \"797382c1-6a9f-48bd-be88-5e85feeef509\") " pod="calico-system/csi-node-driver-9lglv" Jan 20 02:23:40.459866 kubelet[3053]: E0120 02:23:40.459808 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.459866 kubelet[3053]: W0120 02:23:40.459847 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.459866 kubelet[3053]: E0120 02:23:40.459866 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.461505 kubelet[3053]: E0120 02:23:40.460345 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.461705 kubelet[3053]: W0120 02:23:40.461647 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.461705 kubelet[3053]: E0120 02:23:40.461680 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.464847 kubelet[3053]: E0120 02:23:40.464752 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.465190 kubelet[3053]: W0120 02:23:40.465094 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.465190 kubelet[3053]: E0120 02:23:40.465116 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.468250 kubelet[3053]: E0120 02:23:40.468228 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.470817 kubelet[3053]: W0120 02:23:40.470718 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.470817 kubelet[3053]: E0120 02:23:40.470748 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.474968 kubelet[3053]: E0120 02:23:40.474876 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.475219 kubelet[3053]: W0120 02:23:40.475091 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.475219 kubelet[3053]: E0120 02:23:40.475113 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.476595 kubelet[3053]: I0120 02:23:40.476487 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/797382c1-6a9f-48bd-be88-5e85feeef509-socket-dir\") pod \"csi-node-driver-9lglv\" (UID: \"797382c1-6a9f-48bd-be88-5e85feeef509\") " pod="calico-system/csi-node-driver-9lglv" Jan 20 02:23:40.489960 kubelet[3053]: E0120 02:23:40.489785 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.489960 kubelet[3053]: W0120 02:23:40.489894 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.489960 kubelet[3053]: E0120 02:23:40.489924 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.491577 kubelet[3053]: E0120 02:23:40.491025 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.491577 kubelet[3053]: W0120 02:23:40.491125 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.491577 kubelet[3053]: E0120 02:23:40.491148 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.492346 kubelet[3053]: E0120 02:23:40.492233 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.511460 kubelet[3053]: W0120 02:23:40.506007 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.511460 kubelet[3053]: E0120 02:23:40.506233 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.519504 kubelet[3053]: E0120 02:23:40.516757 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.519504 kubelet[3053]: W0120 02:23:40.517082 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.519504 kubelet[3053]: E0120 02:23:40.517430 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.529842 kubelet[3053]: E0120 02:23:40.529716 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.529842 kubelet[3053]: W0120 02:23:40.529743 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.530605 kubelet[3053]: E0120 02:23:40.530466 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.542645 kubelet[3053]: E0120 02:23:40.534862 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.542645 kubelet[3053]: W0120 02:23:40.534885 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.542645 kubelet[3053]: E0120 02:23:40.534908 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.542645 kubelet[3053]: I0120 02:23:40.534944 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhf9n\" (UniqueName: \"kubernetes.io/projected/797382c1-6a9f-48bd-be88-5e85feeef509-kube-api-access-rhf9n\") pod \"csi-node-driver-9lglv\" (UID: \"797382c1-6a9f-48bd-be88-5e85feeef509\") " pod="calico-system/csi-node-driver-9lglv" Jan 20 02:23:40.542645 kubelet[3053]: E0120 02:23:40.535280 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.542645 kubelet[3053]: W0120 02:23:40.535297 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.542645 kubelet[3053]: E0120 02:23:40.535315 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.542645 kubelet[3053]: E0120 02:23:40.535687 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.542645 kubelet[3053]: W0120 02:23:40.535699 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.543935 kubelet[3053]: E0120 02:23:40.535713 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.543935 kubelet[3053]: E0120 02:23:40.536187 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.543935 kubelet[3053]: W0120 02:23:40.536203 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.543935 kubelet[3053]: E0120 02:23:40.536218 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.562694 kubelet[3053]: E0120 02:23:40.558234 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.562694 kubelet[3053]: W0120 02:23:40.558346 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.562694 kubelet[3053]: E0120 02:23:40.558469 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.695607 kubelet[3053]: E0120 02:23:40.695449 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.695607 kubelet[3053]: W0120 02:23:40.695497 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.696073 kubelet[3053]: E0120 02:23:40.696045 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.703053 kubelet[3053]: E0120 02:23:40.702954 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.703703 kubelet[3053]: W0120 02:23:40.703170 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.703703 kubelet[3053]: E0120 02:23:40.703588 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.706930 kubelet[3053]: E0120 02:23:40.706838 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.707403 kubelet[3053]: W0120 02:23:40.707166 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.707403 kubelet[3053]: E0120 02:23:40.707299 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.711679 kubelet[3053]: E0120 02:23:40.711656 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.711855 kubelet[3053]: W0120 02:23:40.711837 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.712226 kubelet[3053]: E0120 02:23:40.712045 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.713590 kubelet[3053]: E0120 02:23:40.713269 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.713840 kubelet[3053]: W0120 02:23:40.713756 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.714444 kubelet[3053]: E0120 02:23:40.714273 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.716662 kubelet[3053]: E0120 02:23:40.716638 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.716793 kubelet[3053]: W0120 02:23:40.716770 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.716912 kubelet[3053]: E0120 02:23:40.716889 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.722505 kubelet[3053]: E0120 02:23:40.719495 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.722505 kubelet[3053]: W0120 02:23:40.719510 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.722505 kubelet[3053]: E0120 02:23:40.719607 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.722505 kubelet[3053]: E0120 02:23:40.720340 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.722505 kubelet[3053]: W0120 02:23:40.720354 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.722505 kubelet[3053]: E0120 02:23:40.720471 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.722505 kubelet[3053]: E0120 02:23:40.721217 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.722505 kubelet[3053]: W0120 02:23:40.721230 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.722505 kubelet[3053]: E0120 02:23:40.721244 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.740419 kubelet[3053]: E0120 02:23:40.739927 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.740419 kubelet[3053]: W0120 02:23:40.739981 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.740419 kubelet[3053]: E0120 02:23:40.740012 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.747643 kubelet[3053]: E0120 02:23:40.747461 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.748328 kubelet[3053]: W0120 02:23:40.747848 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.748328 kubelet[3053]: E0120 02:23:40.747880 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.749584 kubelet[3053]: E0120 02:23:40.749451 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.762320 kubelet[3053]: W0120 02:23:40.749754 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.762320 kubelet[3053]: E0120 02:23:40.749783 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.772486 kubelet[3053]: E0120 02:23:40.772452 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.772675 kubelet[3053]: W0120 02:23:40.772653 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.772780 kubelet[3053]: E0120 02:23:40.772762 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.776454 kubelet[3053]: E0120 02:23:40.773293 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.776454 kubelet[3053]: W0120 02:23:40.773311 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.776454 kubelet[3053]: E0120 02:23:40.773327 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.779906 kubelet[3053]: E0120 02:23:40.779882 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.780093 kubelet[3053]: W0120 02:23:40.779985 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.780093 kubelet[3053]: E0120 02:23:40.780012 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.788766 kubelet[3053]: E0120 02:23:40.788068 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.789152 kubelet[3053]: W0120 02:23:40.789006 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.797170 kubelet[3053]: E0120 02:23:40.789383 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.867997 kubelet[3053]: E0120 02:23:40.867952 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.868191 kubelet[3053]: W0120 02:23:40.868164 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.868307 kubelet[3053]: E0120 02:23:40.868283 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.944843 kubelet[3053]: E0120 02:23:40.944741 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.945299 kubelet[3053]: W0120 02:23:40.944773 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.945299 kubelet[3053]: E0120 02:23:40.945103 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:40.947075 kubelet[3053]: E0120 02:23:40.946973 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.947075 kubelet[3053]: W0120 02:23:40.946995 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.951973 kubelet[3053]: E0120 02:23:40.947325 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:40.960311 kubelet[3053]: E0120 02:23:40.955338 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:40.960636 kubelet[3053]: W0120 02:23:40.960603 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:40.961251 kubelet[3053]: E0120 02:23:40.960772 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:41.007000 audit: BPF prog-id=154 op=LOAD Jan 20 02:23:41.020000 audit: BPF prog-id=155 op=LOAD Jan 20 02:23:41.020000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3560 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:41.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332316633646139313362653734366631346162633132623761373462 Jan 20 02:23:41.020000 audit: BPF prog-id=155 op=UNLOAD Jan 20 02:23:41.020000 audit[3585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3560 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:41.020000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332316633646139313362653734366631346162633132623761373462 Jan 20 02:23:41.020000 audit: BPF prog-id=156 op=LOAD Jan 20 02:23:41.020000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3560 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:41.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332316633646139313362653734366631346162633132623761373462 Jan 20 02:23:41.022000 audit: BPF prog-id=157 op=LOAD Jan 20 02:23:41.022000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3560 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:41.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332316633646139313362653734366631346162633132623761373462 Jan 20 02:23:41.022000 audit: BPF prog-id=157 op=UNLOAD Jan 20 02:23:41.022000 audit[3585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3560 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 02:23:41.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332316633646139313362653734366631346162633132623761373462 Jan 20 02:23:41.022000 audit: BPF prog-id=156 op=UNLOAD Jan 20 02:23:41.022000 audit[3585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3560 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:41.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332316633646139313362653734366631346162633132623761373462 Jan 20 02:23:41.022000 audit: BPF prog-id=158 op=LOAD Jan 20 02:23:41.022000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3560 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:41.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332316633646139313362653734366631346162633132623761373462 Jan 20 02:23:41.146020 kubelet[3053]: E0120 02:23:41.145887 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:41.146020 kubelet[3053]: W0120 02:23:41.145922 3053 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:41.146020 kubelet[3053]: E0120 02:23:41.145956 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:41.218779 systemd[1]: Started cri-containerd-ae84de9c51e820944ea345533a1a31aa6ee2df1028674b301d17a8b74075f781.scope - libcontainer container ae84de9c51e820944ea345533a1a31aa6ee2df1028674b301d17a8b74075f781. Jan 20 02:23:41.383000 audit: BPF prog-id=159 op=LOAD Jan 20 02:23:41.397000 audit: BPF prog-id=160 op=LOAD Jan 20 02:23:41.397000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3643 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:41.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165383464653963353165383230393434656133343535333361316133 Jan 20 02:23:41.397000 audit: BPF prog-id=160 op=UNLOAD Jan 20 02:23:41.397000 audit[3680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3643 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:41.397000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165383464653963353165383230393434656133343535333361316133 Jan 20 02:23:41.397000 audit: BPF prog-id=161 op=LOAD Jan 20 02:23:41.397000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3643 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:41.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165383464653963353165383230393434656133343535333361316133 Jan 20 02:23:41.397000 audit: BPF prog-id=162 op=LOAD Jan 20 02:23:41.397000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3643 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:41.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165383464653963353165383230393434656133343535333361316133 Jan 20 02:23:41.397000 audit: BPF prog-id=162 op=UNLOAD Jan 20 02:23:41.397000 audit[3680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3643 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 02:23:41.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165383464653963353165383230393434656133343535333361316133 Jan 20 02:23:41.397000 audit: BPF prog-id=161 op=UNLOAD Jan 20 02:23:41.397000 audit[3680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3643 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:41.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165383464653963353165383230393434656133343535333361316133 Jan 20 02:23:41.397000 audit: BPF prog-id=163 op=LOAD Jan 20 02:23:41.397000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3643 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:41.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165383464653963353165383230393434656133343535333361316133 Jan 20 02:23:41.515472 kubelet[3053]: E0120 02:23:41.514847 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:23:41.742713 containerd[1640]: time="2026-01-20T02:23:41.742493359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cwnc4,Uid:31edae0f-8908-401d-8494-0222ecbc76f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae84de9c51e820944ea345533a1a31aa6ee2df1028674b301d17a8b74075f781\"" Jan 20 02:23:41.763926 kubelet[3053]: E0120 02:23:41.744677 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:23:41.764980 containerd[1640]: time="2026-01-20T02:23:41.764936906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f476db845-mfdzm,Uid:6df09d31-97d8-4ba8-82f8-8d09a8ea0aa0,Namespace:calico-system,Attempt:0,} returns sandbox id \"321f3da913be746f14abc12b7a74b61c5908daffe0894641fb84ffc2a75fd113\"" Jan 20 02:23:41.775075 containerd[1640]: time="2026-01-20T02:23:41.769739714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 02:23:41.801339 kubelet[3053]: E0120 02:23:41.800513 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:23:43.528466 kubelet[3053]: E0120 02:23:43.520461 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:23:43.757637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268589810.mount: Deactivated successfully. 
Jan 20 02:23:45.032824 containerd[1640]: time="2026-01-20T02:23:45.028835215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:23:45.040270 containerd[1640]: time="2026-01-20T02:23:45.040074065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=1494738" Jan 20 02:23:45.057755 containerd[1640]: time="2026-01-20T02:23:45.054290772Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:23:45.079444 containerd[1640]: time="2026-01-20T02:23:45.074926253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:23:45.096814 containerd[1640]: time="2026-01-20T02:23:45.092470600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 3.316961937s" Jan 20 02:23:45.096814 containerd[1640]: time="2026-01-20T02:23:45.092638993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 02:23:45.116930 containerd[1640]: time="2026-01-20T02:23:45.115219803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 02:23:45.143579 containerd[1640]: time="2026-01-20T02:23:45.143428238Z" level=info msg="CreateContainer within 
sandbox \"ae84de9c51e820944ea345533a1a31aa6ee2df1028674b301d17a8b74075f781\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 02:23:45.308789 containerd[1640]: time="2026-01-20T02:23:45.303255423Z" level=info msg="Container 29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:23:45.321727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2342709726.mount: Deactivated successfully. Jan 20 02:23:45.520170 kubelet[3053]: E0120 02:23:45.516323 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:23:45.522299 containerd[1640]: time="2026-01-20T02:23:45.517733138Z" level=info msg="CreateContainer within sandbox \"ae84de9c51e820944ea345533a1a31aa6ee2df1028674b301d17a8b74075f781\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e\"" Jan 20 02:23:45.524622 containerd[1640]: time="2026-01-20T02:23:45.523815809Z" level=info msg="StartContainer for \"29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e\"" Jan 20 02:23:45.548033 containerd[1640]: time="2026-01-20T02:23:45.547090655Z" level=info msg="connecting to shim 29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e" address="unix:///run/containerd/s/463fc380af1408ace911e3c035f7b7e73e24b72364a7b26d865255c7456e3318" protocol=ttrpc version=3 Jan 20 02:23:45.966743 systemd[1]: Started cri-containerd-29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e.scope - libcontainer container 29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e. 
Jan 20 02:23:48.188387 containerd[1640]: time="2026-01-20T02:23:48.163004431Z" level=error msg="get state for 29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e" error="context deadline exceeded" Jan 20 02:23:48.188387 containerd[1640]: time="2026-01-20T02:23:48.175870902Z" level=warning msg="unknown status" status=0 Jan 20 02:23:51.732708 kubelet[3053]: E0120 02:23:51.730615 3053 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.131s" Jan 20 02:23:51.755586 kubelet[3053]: E0120 02:23:51.755486 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:23:51.799451 kubelet[3053]: E0120 02:23:51.795040 3053 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Jan 20 02:23:55.269882 kubelet[3053]: E0120 02:23:55.268164 3053 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.537s" Jan 20 02:23:55.297110 containerd[1640]: time="2026-01-20T02:23:55.293073499Z" level=error msg="get state for 29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e" error="context deadline exceeded" Jan 20 02:23:55.297110 containerd[1640]: time="2026-01-20T02:23:55.293161182Z" level=warning msg="unknown status" status=0 Jan 20 02:23:55.699726 kubelet[3053]: E0120 02:23:55.694224 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:23:55.894212 kernel: kauditd_printk_skb: 46 
callbacks suppressed Jan 20 02:23:55.894435 kernel: audit: type=1334 audit(1768875835.872:567): prog-id=164 op=LOAD Jan 20 02:23:55.872000 audit: BPF prog-id=164 op=LOAD Jan 20 02:23:55.872000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3643 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:55.950292 kernel: audit: type=1300 audit(1768875835.872:567): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3643 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:55.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646362353861363239656231306234356462386361623636326434 Jan 20 02:23:55.953436 kernel: audit: type=1327 audit(1768875835.872:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646362353861363239656231306234356462386361623636326434 Jan 20 02:23:56.117441 kernel: audit: type=1334 audit(1768875835.894:568): prog-id=165 op=LOAD Jan 20 02:23:56.684681 kernel: audit: type=1300 audit(1768875835.894:568): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3643 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:56.778043 kernel: audit: type=1327 
audit(1768875835.894:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646362353861363239656231306234356462386361623636326434 Jan 20 02:23:56.779053 kernel: audit: type=1334 audit(1768875835.894:569): prog-id=165 op=UNLOAD Jan 20 02:23:56.779115 kernel: audit: type=1300 audit(1768875835.894:569): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3643 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:56.779150 kernel: audit: type=1327 audit(1768875835.894:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646362353861363239656231306234356462386361623636326434 Jan 20 02:23:56.797966 kernel: audit: type=1334 audit(1768875835.894:570): prog-id=164 op=UNLOAD Jan 20 02:23:55.894000 audit: BPF prog-id=165 op=LOAD Jan 20 02:23:55.894000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3643 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:55.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646362353861363239656231306234356462386361623636326434 Jan 20 02:23:55.894000 audit: BPF prog-id=165 op=UNLOAD Jan 20 02:23:55.894000 audit[3741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=16 a1=0 a2=0 a3=0 items=0 ppid=3643 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:55.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646362353861363239656231306234356462386361623636326434 Jan 20 02:23:55.894000 audit: BPF prog-id=164 op=UNLOAD Jan 20 02:23:55.894000 audit[3741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3643 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:55.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646362353861363239656231306234356462386361623636326434 Jan 20 02:23:55.894000 audit: BPF prog-id=166 op=LOAD Jan 20 02:23:55.894000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3643 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:23:55.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239646362353861363239656231306234356462386361623636326434 Jan 20 02:23:57.186689 containerd[1640]: time="2026-01-20T02:23:57.186603542Z" level=error msg="ttrpc: 
received message on inactive stream" stream=3 Jan 20 02:23:57.187963 containerd[1640]: time="2026-01-20T02:23:57.187595662Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Jan 20 02:23:57.212238 kubelet[3053]: E0120 02:23:57.212031 3053 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:23:58.809964 kubelet[3053]: E0120 02:23:58.800003 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:23:58.939191 kubelet[3053]: E0120 02:23:58.938863 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:58.939191 kubelet[3053]: W0120 02:23:58.938951 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:58.939191 kubelet[3053]: E0120 02:23:58.938997 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:58.939191 kubelet[3053]: E0120 02:23:58.940156 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:58.939191 kubelet[3053]: W0120 02:23:58.940174 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:58.939191 kubelet[3053]: E0120 02:23:58.940194 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:58.939191 kubelet[3053]: E0120 02:23:58.945212 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:58.939191 kubelet[3053]: W0120 02:23:58.945228 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:58.939191 kubelet[3053]: E0120 02:23:58.945249 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:59.306077 kubelet[3053]: E0120 02:23:58.996198 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:59.306077 kubelet[3053]: W0120 02:23:58.996231 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:59.306077 kubelet[3053]: E0120 02:23:58.996260 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:59.483492 kubelet[3053]: E0120 02:23:59.436506 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:59.483492 kubelet[3053]: W0120 02:23:59.436791 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:59.483492 kubelet[3053]: E0120 02:23:59.437074 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:59.593046 kubelet[3053]: E0120 02:23:59.585890 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:59.611743 kubelet[3053]: W0120 02:23:59.604755 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:59.611743 kubelet[3053]: E0120 02:23:59.604847 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:59.611743 kubelet[3053]: E0120 02:23:59.608669 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:59.611743 kubelet[3053]: W0120 02:23:59.608691 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:59.611743 kubelet[3053]: E0120 02:23:59.608717 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:59.626111 kubelet[3053]: E0120 02:23:59.625807 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:59.626111 kubelet[3053]: W0120 02:23:59.625850 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:59.626111 kubelet[3053]: E0120 02:23:59.625885 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:59.629647 kubelet[3053]: E0120 02:23:59.629621 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:59.629769 kubelet[3053]: W0120 02:23:59.629749 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:59.630030 kubelet[3053]: E0120 02:23:59.629902 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:59.630654 kubelet[3053]: E0120 02:23:59.630635 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:59.631099 kubelet[3053]: W0120 02:23:59.630738 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:59.631099 kubelet[3053]: E0120 02:23:59.630756 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:59.634970 kubelet[3053]: E0120 02:23:59.634864 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:59.635373 kubelet[3053]: W0120 02:23:59.634897 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:59.635373 kubelet[3053]: E0120 02:23:59.635096 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:59.638830 kubelet[3053]: E0120 02:23:59.638799 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:59.638972 kubelet[3053]: W0120 02:23:59.638950 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:59.639075 kubelet[3053]: E0120 02:23:59.639050 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:59.700695 kubelet[3053]: E0120 02:23:59.700500 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:59.701345 kubelet[3053]: W0120 02:23:59.700775 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:59.701345 kubelet[3053]: E0120 02:23:59.700902 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:59.711067 kubelet[3053]: E0120 02:23:59.704924 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:59.711067 kubelet[3053]: W0120 02:23:59.704949 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:59.711067 kubelet[3053]: E0120 02:23:59.705047 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:59.726976 kubelet[3053]: E0120 02:23:59.722112 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:59.726976 kubelet[3053]: W0120 02:23:59.722143 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:59.726976 kubelet[3053]: E0120 02:23:59.722172 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 02:23:59.727604 kubelet[3053]: E0120 02:23:59.727409 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 02:23:59.727604 kubelet[3053]: W0120 02:23:59.727456 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 02:23:59.727604 kubelet[3053]: E0120 02:23:59.727491 3053 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 02:23:59.980658 containerd[1640]: time="2026-01-20T02:23:59.972352730Z" level=info msg="StartContainer for \"29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e\" returns successfully" Jan 20 02:24:00.587596 kubelet[3053]: E0120 02:24:00.587381 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:00.783881 systemd[1]: cri-containerd-29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e.scope: Deactivated successfully. Jan 20 02:24:00.808716 systemd[1]: cri-containerd-29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e.scope: Consumed 308ms CPU time, 6.9M memory peak, 1.8M read from disk. 
Jan 20 02:24:00.817000 audit: BPF prog-id=166 op=UNLOAD Jan 20 02:24:01.055946 kubelet[3053]: E0120 02:24:00.965811 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:24:01.302201 containerd[1640]: time="2026-01-20T02:24:01.283844988Z" level=info msg="received container exit event container_id:\"29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e\" id:\"29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e\" pid:3754 exited_at:{seconds:1768875841 nanos:250974875}" Jan 20 02:24:02.354902 kubelet[3053]: E0120 02:24:02.354832 3053 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:24:02.479016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e-rootfs.mount: Deactivated successfully. 
Jan 20 02:24:02.562495 kubelet[3053]: E0120 02:24:02.533247 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:03.422466 kubelet[3053]: E0120 02:24:03.421482 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:24:04.590768 kubelet[3053]: E0120 02:24:04.515234 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:06.518606 kubelet[3053]: E0120 02:24:06.517054 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:07.375064 kubelet[3053]: E0120 02:24:07.368357 3053 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:24:08.521036 kubelet[3053]: E0120 02:24:08.520935 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" 
podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:10.525606 kubelet[3053]: E0120 02:24:10.523725 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:12.360357 containerd[1640]: time="2026-01-20T02:24:12.356634022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:24:12.375268 containerd[1640]: time="2026-01-20T02:24:12.368507374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33736633" Jan 20 02:24:12.384351 containerd[1640]: time="2026-01-20T02:24:12.379155343Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:24:12.398417 kubelet[3053]: E0120 02:24:12.387956 3053 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:24:12.415663 containerd[1640]: time="2026-01-20T02:24:12.405658728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:24:12.415663 containerd[1640]: time="2026-01-20T02:24:12.406808779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 27.291510351s" Jan 20 02:24:12.415663 containerd[1640]: time="2026-01-20T02:24:12.406850716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 02:24:12.417821 containerd[1640]: time="2026-01-20T02:24:12.417779898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 02:24:12.520758 kubelet[3053]: E0120 02:24:12.514481 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:12.554433 containerd[1640]: time="2026-01-20T02:24:12.550668295Z" level=info msg="CreateContainer within sandbox \"321f3da913be746f14abc12b7a74b61c5908daffe0894641fb84ffc2a75fd113\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 02:24:12.657452 containerd[1640]: time="2026-01-20T02:24:12.655941114Z" level=info msg="Container 10aded405619ff2d675637018942053803fa355cbb83079b53dbf7b6f6dfc005: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:24:12.735479 containerd[1640]: time="2026-01-20T02:24:12.731704613Z" level=info msg="CreateContainer within sandbox \"321f3da913be746f14abc12b7a74b61c5908daffe0894641fb84ffc2a75fd113\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"10aded405619ff2d675637018942053803fa355cbb83079b53dbf7b6f6dfc005\"" Jan 20 02:24:12.747373 containerd[1640]: time="2026-01-20T02:24:12.744475277Z" level=info msg="StartContainer for \"10aded405619ff2d675637018942053803fa355cbb83079b53dbf7b6f6dfc005\"" Jan 20 02:24:12.758274 containerd[1640]: time="2026-01-20T02:24:12.754599497Z" 
level=info msg="connecting to shim 10aded405619ff2d675637018942053803fa355cbb83079b53dbf7b6f6dfc005" address="unix:///run/containerd/s/ba5a8ba912b993db95d8f30a04f7f2106a3e44fd8eb524af0182c40943df35b6" protocol=ttrpc version=3 Jan 20 02:24:14.168010 systemd[1]: Started cri-containerd-10aded405619ff2d675637018942053803fa355cbb83079b53dbf7b6f6dfc005.scope - libcontainer container 10aded405619ff2d675637018942053803fa355cbb83079b53dbf7b6f6dfc005. Jan 20 02:24:14.333947 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 20 02:24:14.334715 kernel: audit: type=1334 audit(1768875854.309:573): prog-id=167 op=LOAD Jan 20 02:24:14.309000 audit: BPF prog-id=167 op=LOAD Jan 20 02:24:14.316000 audit: BPF prog-id=168 op=LOAD Jan 20 02:24:14.364971 kernel: audit: type=1334 audit(1768875854.316:574): prog-id=168 op=LOAD Jan 20 02:24:14.372471 kernel: audit: type=1300 audit(1768875854.316:574): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3560 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:14.316000 audit[3819]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3560 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:14.432070 kernel: audit: type=1327 audit(1768875854.316:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616465643430353631396666326436373536333730313839343230 Jan 20 02:24:14.316000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616465643430353631396666326436373536333730313839343230 Jan 20 02:24:14.456699 kernel: audit: type=1334 audit(1768875854.316:575): prog-id=168 op=UNLOAD Jan 20 02:24:14.459745 kernel: audit: type=1300 audit(1768875854.316:575): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3560 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:14.316000 audit: BPF prog-id=168 op=UNLOAD Jan 20 02:24:14.316000 audit[3819]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3560 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:14.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616465643430353631396666326436373536333730313839343230 Jan 20 02:24:14.523946 kernel: audit: type=1327 audit(1768875854.316:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616465643430353631396666326436373536333730313839343230 Jan 20 02:24:14.577611 kernel: audit: type=1334 audit(1768875854.318:576): prog-id=169 op=LOAD Jan 20 02:24:15.012895 kernel: audit: type=1300 audit(1768875854.318:576): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3560 pid=3819 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:15.608991 kernel: audit: type=1327 audit(1768875854.318:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616465643430353631396666326436373536333730313839343230 Jan 20 02:24:14.318000 audit: BPF prog-id=169 op=LOAD Jan 20 02:24:14.318000 audit[3819]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3560 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:14.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616465643430353631396666326436373536333730313839343230 Jan 20 02:24:14.319000 audit: BPF prog-id=170 op=LOAD Jan 20 02:24:14.319000 audit[3819]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3560 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:14.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616465643430353631396666326436373536333730313839343230 Jan 20 02:24:14.319000 audit: BPF prog-id=170 op=UNLOAD Jan 20 02:24:14.319000 audit[3819]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3560 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:14.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616465643430353631396666326436373536333730313839343230 Jan 20 02:24:14.319000 audit: BPF prog-id=169 op=UNLOAD Jan 20 02:24:14.319000 audit[3819]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3560 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:14.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616465643430353631396666326436373536333730313839343230 Jan 20 02:24:14.322000 audit: BPF prog-id=171 op=LOAD Jan 20 02:24:14.322000 audit[3819]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3560 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:14.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130616465643430353631396666326436373536333730313839343230 Jan 20 02:24:16.569687 containerd[1640]: time="2026-01-20T02:24:16.561940432Z" 
level=error msg="get state for 10aded405619ff2d675637018942053803fa355cbb83079b53dbf7b6f6dfc005" error="context deadline exceeded" Jan 20 02:24:16.569687 containerd[1640]: time="2026-01-20T02:24:16.562016754Z" level=warning msg="unknown status" status=0 Jan 20 02:24:16.807937 kubelet[3053]: E0120 02:24:16.807854 3053 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.205s" Jan 20 02:24:17.012975 kubelet[3053]: E0120 02:24:16.992100 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:17.409053 kubelet[3053]: E0120 02:24:17.407844 3053 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:24:17.832101 containerd[1640]: time="2026-01-20T02:24:17.829445419Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 02:24:18.415754 containerd[1640]: time="2026-01-20T02:24:18.412359762Z" level=info msg="StartContainer for \"10aded405619ff2d675637018942053803fa355cbb83079b53dbf7b6f6dfc005\" returns successfully" Jan 20 02:24:18.711013 kubelet[3053]: E0120 02:24:18.700103 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:19.067484 kubelet[3053]: E0120 02:24:19.064958 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:24:19.602737 kubelet[3053]: I0120 02:24:19.584137 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f476db845-mfdzm" podStartSLOduration=10.976726751 podStartE2EDuration="41.584115447s" podCreationTimestamp="2026-01-20 02:23:38 +0000 UTC" firstStartedPulling="2026-01-20 02:23:41.8099753 +0000 UTC m=+112.708617510" lastFinishedPulling="2026-01-20 02:24:12.417364007 +0000 UTC m=+143.316006206" observedRunningTime="2026-01-20 02:24:19.521498956 +0000 UTC m=+150.420141165" watchObservedRunningTime="2026-01-20 02:24:19.584115447 +0000 UTC m=+150.482757656" Jan 20 02:24:20.207967 kubelet[3053]: E0120 02:24:20.207393 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:24:20.533405 kubelet[3053]: E0120 02:24:20.529104 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:21.190596 kubelet[3053]: E0120 02:24:21.188864 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:24:22.430415 kubelet[3053]: E0120 02:24:22.421756 3053 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:24:22.524795 kubelet[3053]: E0120 02:24:22.523390 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:24.521038 kubelet[3053]: E0120 02:24:24.518855 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:26.581160 kubelet[3053]: E0120 02:24:26.534703 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:27.425923 kubelet[3053]: E0120 02:24:27.425844 3053 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:24:27.514015 kubelet[3053]: E0120 02:24:27.513488 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:24:28.522212 kubelet[3053]: E0120 02:24:28.522116 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:29.697593 kubelet[3053]: E0120 02:24:29.696328 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:31.527872 kubelet[3053]: E0120 02:24:31.526629 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:32.435609 kubelet[3053]: E0120 02:24:32.435478 3053 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:24:33.534018 kubelet[3053]: E0120 02:24:33.525951 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:35.513906 kubelet[3053]: E0120 02:24:35.513839 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:37.463099 kubelet[3053]: E0120 02:24:37.462491 3053 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:24:37.551191 kubelet[3053]: E0120 02:24:37.533854 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:38.787009 containerd[1640]: time="2026-01-20T02:24:38.782659239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:24:38.787009 containerd[1640]: time="2026-01-20T02:24:38.784762806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70445002" Jan 20 02:24:38.793020 containerd[1640]: time="2026-01-20T02:24:38.792033239Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:24:38.838685 containerd[1640]: time="2026-01-20T02:24:38.836431536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:24:38.838685 containerd[1640]: time="2026-01-20T02:24:38.837455372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 26.418943849s" Jan 20 02:24:38.878791 containerd[1640]: time="2026-01-20T02:24:38.878611574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 02:24:38.917343 containerd[1640]: time="2026-01-20T02:24:38.916809558Z" level=info msg="CreateContainer within sandbox 
\"ae84de9c51e820944ea345533a1a31aa6ee2df1028674b301d17a8b74075f781\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 02:24:39.208427 containerd[1640]: time="2026-01-20T02:24:39.205852220Z" level=info msg="Container 7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:24:39.368877 containerd[1640]: time="2026-01-20T02:24:39.365044797Z" level=info msg="CreateContainer within sandbox \"ae84de9c51e820944ea345533a1a31aa6ee2df1028674b301d17a8b74075f781\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f\"" Jan 20 02:24:39.381948 containerd[1640]: time="2026-01-20T02:24:39.373486142Z" level=info msg="StartContainer for \"7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f\"" Jan 20 02:24:39.407656 containerd[1640]: time="2026-01-20T02:24:39.401736736Z" level=info msg="connecting to shim 7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f" address="unix:///run/containerd/s/463fc380af1408ace911e3c035f7b7e73e24b72364a7b26d865255c7456e3318" protocol=ttrpc version=3 Jan 20 02:24:39.521797 kubelet[3053]: E0120 02:24:39.519753 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:39.682876 systemd[1]: Started cri-containerd-7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f.scope - libcontainer container 7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f. 
Jan 20 02:24:40.192385 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 02:24:40.193775 kernel: audit: type=1334 audit(1768875880.178:581): prog-id=172 op=LOAD Jan 20 02:24:40.178000 audit: BPF prog-id=172 op=LOAD Jan 20 02:24:40.178000 audit[3867]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3643 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:40.297103 kernel: audit: type=1300 audit(1768875880.178:581): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3643 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:40.369863 kernel: audit: type=1327 audit(1768875880.178:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730313964383764333437316533393531646538373361633139356565 Jan 20 02:24:40.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730313964383764333437316533393531646538373361633139356565 Jan 20 02:24:40.180000 audit: BPF prog-id=173 op=LOAD Jan 20 02:24:40.180000 audit[3867]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3643 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:40.480157 kernel: audit: type=1334 
audit(1768875880.180:582): prog-id=173 op=LOAD Jan 20 02:24:40.480345 kernel: audit: type=1300 audit(1768875880.180:582): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3643 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:40.483833 kernel: audit: type=1327 audit(1768875880.180:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730313964383764333437316533393531646538373361633139356565 Jan 20 02:24:40.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730313964383764333437316533393531646538373361633139356565 Jan 20 02:24:40.555697 kernel: audit: type=1334 audit(1768875880.180:583): prog-id=173 op=UNLOAD Jan 20 02:24:40.180000 audit: BPF prog-id=173 op=UNLOAD Jan 20 02:24:40.562693 kernel: audit: type=1300 audit(1768875880.180:583): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3643 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:40.180000 audit[3867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3643 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:40.606672 kernel: audit: type=1327 audit(1768875880.180:583): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730313964383764333437316533393531646538373361633139356565 Jan 20 02:24:40.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730313964383764333437316533393531646538373361633139356565 Jan 20 02:24:40.629723 kernel: audit: type=1334 audit(1768875880.180:584): prog-id=172 op=UNLOAD Jan 20 02:24:40.180000 audit: BPF prog-id=172 op=UNLOAD Jan 20 02:24:40.180000 audit[3867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3643 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:40.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730313964383764333437316533393531646538373361633139356565 Jan 20 02:24:40.180000 audit: BPF prog-id=174 op=LOAD Jan 20 02:24:40.180000 audit[3867]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3643 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:24:40.180000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730313964383764333437316533393531646538373361633139356565 Jan 20 02:24:40.933813 containerd[1640]: time="2026-01-20T02:24:40.931164129Z" level=info msg="StartContainer for \"7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f\" returns successfully" Jan 20 02:24:41.522220 kubelet[3053]: E0120 02:24:41.522138 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:41.865054 kubelet[3053]: E0120 02:24:41.863475 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:24:42.464819 kubelet[3053]: E0120 02:24:42.464709 3053 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:24:43.517591 kubelet[3053]: E0120 02:24:43.516191 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:45.515041 kubelet[3053]: E0120 02:24:45.514950 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:47.470948 kubelet[3053]: E0120 02:24:47.470885 3053 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 02:24:47.520860 kubelet[3053]: E0120 02:24:47.520738 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:48.528837 kubelet[3053]: E0120 02:24:48.513722 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:24:49.515618 kubelet[3053]: E0120 02:24:49.514326 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:24:50.546007 kubelet[3053]: E0120 02:24:50.540775 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:24:50.691926 systemd[1]: cri-containerd-7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f.scope: Deactivated successfully. Jan 20 02:24:50.692479 systemd[1]: cri-containerd-7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f.scope: Consumed 2.541s CPU time, 175.1M memory peak, 3M read from disk, 171.3M written to disk. 
Jan 20 02:24:50.766011 kernel: kauditd_printk_skb: 5 callbacks suppressed
Jan 20 02:24:50.766198 kernel: audit: type=1334 audit(1768875890.728:586): prog-id=174 op=UNLOAD
Jan 20 02:24:50.728000 audit: BPF prog-id=174 op=UNLOAD
Jan 20 02:24:50.766391 containerd[1640]: time="2026-01-20T02:24:50.764401225Z" level=info msg="received container exit event container_id:\"7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f\" id:\"7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f\" pid:3882 exited_at:{seconds:1768875890 nanos:750039684}"
Jan 20 02:24:51.235676 kubelet[3053]: E0120 02:24:51.223181 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:24:51.338423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f-rootfs.mount: Deactivated successfully.
Jan 20 02:24:51.543112 kubelet[3053]: E0120 02:24:51.524705 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509"
Jan 20 02:24:51.543112 kubelet[3053]: E0120 02:24:51.529006 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:24:51.758000 audit[3915]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 20 02:24:51.812660 kernel: audit: type=1325 audit(1768875891.758:587): table=filter:117 family=2 entries=21 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 20 02:24:51.812819 kernel: audit: type=1300 audit(1768875891.758:587): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd20cf50b0 a2=0 a3=7ffd20cf509c items=0 ppid=3166 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:24:51.758000 audit[3915]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd20cf50b0 a2=0 a3=7ffd20cf509c items=0 ppid=3166 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:24:51.758000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 20 02:24:51.887199 kernel: audit: type=1327 audit(1768875891.758:587): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 20 02:24:51.887396 kernel: audit: type=1325 audit(1768875891.813:588): table=nat:118 family=2 entries=19 op=nft_register_chain pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 20 02:24:51.813000 audit[3915]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 20 02:24:51.923608 kernel: audit: type=1300 audit(1768875891.813:588): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd20cf50b0 a2=0 a3=7ffd20cf509c items=0 ppid=3166 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:24:51.813000 audit[3915]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd20cf50b0 a2=0 a3=7ffd20cf509c items=0 ppid=3166 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:24:51.931506 kernel: audit: type=1327 audit(1768875891.813:588): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 20 02:24:51.813000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 20 02:24:52.204655 kubelet[3053]: E0120 02:24:52.199494 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:24:52.221338 containerd[1640]: time="2026-01-20T02:24:52.213577577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 20 02:24:53.617927 systemd[1]: Created slice kubepods-besteffort-pod797382c1_6a9f_48bd_be88_5e85feeef509.slice - libcontainer container kubepods-besteffort-pod797382c1_6a9f_48bd_be88_5e85feeef509.slice.
Jan 20 02:24:53.734890 containerd[1640]: time="2026-01-20T02:24:53.734825602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9lglv,Uid:797382c1-6a9f-48bd-be88-5e85feeef509,Namespace:calico-system,Attempt:0,}"
Jan 20 02:24:55.116221 containerd[1640]: time="2026-01-20T02:24:55.115836827Z" level=error msg="Failed to destroy network for sandbox \"02d8a92046d30bfb29437eb2ba36ce84a3f56c971697a385d0a56b258d0d960c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:24:55.137885 systemd[1]: run-netns-cni\x2dcc1894f9\x2ddb9d\x2d0413\x2d9c8c\x2d94af3bc65541.mount: Deactivated successfully.
Jan 20 02:24:55.174747 containerd[1640]: time="2026-01-20T02:24:55.174423285Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9lglv,Uid:797382c1-6a9f-48bd-be88-5e85feeef509,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d8a92046d30bfb29437eb2ba36ce84a3f56c971697a385d0a56b258d0d960c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:24:55.176311 kubelet[3053]: E0120 02:24:55.175590 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d8a92046d30bfb29437eb2ba36ce84a3f56c971697a385d0a56b258d0d960c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:24:55.176311 kubelet[3053]: E0120 02:24:55.175678 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d8a92046d30bfb29437eb2ba36ce84a3f56c971697a385d0a56b258d0d960c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9lglv"
Jan 20 02:24:55.176311 kubelet[3053]: E0120 02:24:55.175733 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d8a92046d30bfb29437eb2ba36ce84a3f56c971697a385d0a56b258d0d960c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9lglv"
Jan 20 02:24:55.176961 kubelet[3053]: E0120 02:24:55.175809 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02d8a92046d30bfb29437eb2ba36ce84a3f56c971697a385d0a56b258d0d960c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509"
Jan 20 02:24:58.060924 kubelet[3053]: I0120 02:24:58.057423 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e572f9c2-ce5a-4d3c-956a-a140a15040fb-tigera-ca-bundle\") pod \"calico-kube-controllers-746557d8fc-ztfh7\" (UID: \"e572f9c2-ce5a-4d3c-956a-a140a15040fb\") " pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7"
Jan 20 02:24:58.060924 kubelet[3053]: I0120 02:24:58.057594 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5kj6\" (UniqueName: \"kubernetes.io/projected/e572f9c2-ce5a-4d3c-956a-a140a15040fb-kube-api-access-j5kj6\") pod \"calico-kube-controllers-746557d8fc-ztfh7\" (UID: \"e572f9c2-ce5a-4d3c-956a-a140a15040fb\") " pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7"
Jan 20 02:24:58.060924 kubelet[3053]: I0120 02:24:58.057661 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d7d1be6-3f16-4e7b-a636-54909266045f-config-volume\") pod \"coredns-66bc5c9577-m48wx\" (UID: \"7d7d1be6-3f16-4e7b-a636-54909266045f\") " pod="kube-system/coredns-66bc5c9577-m48wx"
Jan 20 02:24:58.060924 kubelet[3053]: I0120 02:24:58.057691 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shcls\" (UniqueName: \"kubernetes.io/projected/7d7d1be6-3f16-4e7b-a636-54909266045f-kube-api-access-shcls\") pod \"coredns-66bc5c9577-m48wx\" (UID: \"7d7d1be6-3f16-4e7b-a636-54909266045f\") " pod="kube-system/coredns-66bc5c9577-m48wx"
Jan 20 02:24:58.062717 systemd[1]: Created slice kubepods-burstable-pod7d7d1be6_3f16_4e7b_a636_54909266045f.slice - libcontainer container kubepods-burstable-pod7d7d1be6_3f16_4e7b_a636_54909266045f.slice.
Jan 20 02:24:58.170942 kubelet[3053]: I0120 02:24:58.169630 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5hdl\" (UniqueName: \"kubernetes.io/projected/bc4468de-9eba-48b5-88c5-c38dc3f08d39-kube-api-access-x5hdl\") pod \"coredns-66bc5c9577-4cpfh\" (UID: \"bc4468de-9eba-48b5-88c5-c38dc3f08d39\") " pod="kube-system/coredns-66bc5c9577-4cpfh"
Jan 20 02:24:58.170942 kubelet[3053]: I0120 02:24:58.169853 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4dk9\" (UniqueName: \"kubernetes.io/projected/4d193768-31ad-4962-ae34-80e85c7499df-kube-api-access-v4dk9\") pod \"calico-apiserver-764db5c9d9-v64bv\" (UID: \"4d193768-31ad-4962-ae34-80e85c7499df\") " pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv"
Jan 20 02:24:58.170942 kubelet[3053]: I0120 02:24:58.169967 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4693d45f-0393-4dfc-8971-a75e80c17ac2-whisker-backend-key-pair\") pod \"whisker-bd7f95659-84svg\" (UID: \"4693d45f-0393-4dfc-8971-a75e80c17ac2\") " pod="calico-system/whisker-bd7f95659-84svg"
Jan 20 02:24:58.171287 kubelet[3053]: I0120 02:24:58.171203 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4892884d-a213-4dd6-ab53-844c331ae6d1-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-tfwc7\" (UID: \"4892884d-a213-4dd6-ab53-844c331ae6d1\") " pod="calico-system/goldmane-7c778bb748-tfwc7"
Jan 20 02:24:58.171343 kubelet[3053]: I0120 02:24:58.171331 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg6km\" (UniqueName: \"kubernetes.io/projected/4693d45f-0393-4dfc-8971-a75e80c17ac2-kube-api-access-mg6km\") pod \"whisker-bd7f95659-84svg\" (UID: \"4693d45f-0393-4dfc-8971-a75e80c17ac2\") " pod="calico-system/whisker-bd7f95659-84svg"
Jan 20 02:24:58.183832 kubelet[3053]: I0120 02:24:58.171448 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ca9f2980-346b-4927-8985-9cb6081e02db-calico-apiserver-certs\") pod \"calico-apiserver-764db5c9d9-r829f\" (UID: \"ca9f2980-346b-4927-8985-9cb6081e02db\") " pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f"
Jan 20 02:24:58.183832 kubelet[3053]: I0120 02:24:58.171741 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4892884d-a213-4dd6-ab53-844c331ae6d1-goldmane-key-pair\") pod \"goldmane-7c778bb748-tfwc7\" (UID: \"4892884d-a213-4dd6-ab53-844c331ae6d1\") " pod="calico-system/goldmane-7c778bb748-tfwc7"
Jan 20 02:24:58.183832 kubelet[3053]: I0120 02:24:58.171862 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4d193768-31ad-4962-ae34-80e85c7499df-calico-apiserver-certs\") pod \"calico-apiserver-764db5c9d9-v64bv\" (UID: \"4d193768-31ad-4962-ae34-80e85c7499df\") " pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv"
Jan 20 02:24:58.183832 kubelet[3053]: I0120 02:24:58.171992 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc4468de-9eba-48b5-88c5-c38dc3f08d39-config-volume\") pod \"coredns-66bc5c9577-4cpfh\" (UID: \"bc4468de-9eba-48b5-88c5-c38dc3f08d39\") " pod="kube-system/coredns-66bc5c9577-4cpfh"
Jan 20 02:24:58.183832 kubelet[3053]: I0120 02:24:58.172133 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9bfd\" (UniqueName: \"kubernetes.io/projected/ca9f2980-346b-4927-8985-9cb6081e02db-kube-api-access-s9bfd\") pod \"calico-apiserver-764db5c9d9-r829f\" (UID: \"ca9f2980-346b-4927-8985-9cb6081e02db\") " pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f"
Jan 20 02:24:58.189976 kubelet[3053]: I0120 02:24:58.172259 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4892884d-a213-4dd6-ab53-844c331ae6d1-config\") pod \"goldmane-7c778bb748-tfwc7\" (UID: \"4892884d-a213-4dd6-ab53-844c331ae6d1\") " pod="calico-system/goldmane-7c778bb748-tfwc7"
Jan 20 02:24:58.189976 kubelet[3053]: I0120 02:24:58.172381 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjgkj\" (UniqueName: \"kubernetes.io/projected/4892884d-a213-4dd6-ab53-844c331ae6d1-kube-api-access-jjgkj\") pod \"goldmane-7c778bb748-tfwc7\" (UID: \"4892884d-a213-4dd6-ab53-844c331ae6d1\") " pod="calico-system/goldmane-7c778bb748-tfwc7"
Jan 20 02:24:58.189976 kubelet[3053]: I0120 02:24:58.172478 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4693d45f-0393-4dfc-8971-a75e80c17ac2-whisker-ca-bundle\") pod \"whisker-bd7f95659-84svg\" (UID: \"4693d45f-0393-4dfc-8971-a75e80c17ac2\") " pod="calico-system/whisker-bd7f95659-84svg"
Jan 20 02:24:58.199476 systemd[1]: Created slice kubepods-besteffort-pode572f9c2_ce5a_4d3c_956a_a140a15040fb.slice - libcontainer container kubepods-besteffort-pode572f9c2_ce5a_4d3c_956a_a140a15040fb.slice.
Jan 20 02:24:58.522795 systemd[1]: Created slice kubepods-besteffort-pod4892884d_a213_4dd6_ab53_844c331ae6d1.slice - libcontainer container kubepods-besteffort-pod4892884d_a213_4dd6_ab53_844c331ae6d1.slice.
Jan 20 02:24:58.740431 systemd[1]: Created slice kubepods-besteffort-pod4693d45f_0393_4dfc_8971_a75e80c17ac2.slice - libcontainer container kubepods-besteffort-pod4693d45f_0393_4dfc_8971_a75e80c17ac2.slice.
Jan 20 02:24:58.803353 containerd[1640]: time="2026-01-20T02:24:58.803250329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746557d8fc-ztfh7,Uid:e572f9c2-ce5a-4d3c-956a-a140a15040fb,Namespace:calico-system,Attempt:0,}"
Jan 20 02:24:58.873199 containerd[1640]: time="2026-01-20T02:24:58.849487619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tfwc7,Uid:4892884d-a213-4dd6-ab53-844c331ae6d1,Namespace:calico-system,Attempt:0,}"
Jan 20 02:24:58.873199 containerd[1640]: time="2026-01-20T02:24:58.852250651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd7f95659-84svg,Uid:4693d45f-0393-4dfc-8971-a75e80c17ac2,Namespace:calico-system,Attempt:0,}"
Jan 20 02:24:58.866226 systemd[1]: Created slice kubepods-besteffort-pod4d193768_31ad_4962_ae34_80e85c7499df.slice - libcontainer container kubepods-besteffort-pod4d193768_31ad_4962_ae34_80e85c7499df.slice.
Jan 20 02:24:58.900029 kubelet[3053]: E0120 02:24:58.898710 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:24:58.906759 containerd[1640]: time="2026-01-20T02:24:58.906708875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m48wx,Uid:7d7d1be6-3f16-4e7b-a636-54909266045f,Namespace:kube-system,Attempt:0,}"
Jan 20 02:24:58.974032 containerd[1640]: time="2026-01-20T02:24:58.973453703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-v64bv,Uid:4d193768-31ad-4962-ae34-80e85c7499df,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 02:24:58.975331 systemd[1]: Created slice kubepods-burstable-podbc4468de_9eba_48b5_88c5_c38dc3f08d39.slice - libcontainer container kubepods-burstable-podbc4468de_9eba_48b5_88c5_c38dc3f08d39.slice.
Jan 20 02:24:59.046765 systemd[1]: Created slice kubepods-besteffort-podca9f2980_346b_4927_8985_9cb6081e02db.slice - libcontainer container kubepods-besteffort-podca9f2980_346b_4927_8985_9cb6081e02db.slice.
Jan 20 02:24:59.063497 kubelet[3053]: E0120 02:24:59.063217 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:24:59.086880 containerd[1640]: time="2026-01-20T02:24:59.086666902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4cpfh,Uid:bc4468de-9eba-48b5-88c5-c38dc3f08d39,Namespace:kube-system,Attempt:0,}"
Jan 20 02:24:59.180297 containerd[1640]: time="2026-01-20T02:24:59.180182665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-r829f,Uid:ca9f2980-346b-4927-8985-9cb6081e02db,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 02:25:00.783880 containerd[1640]: time="2026-01-20T02:25:00.775017034Z" level=error msg="Failed to destroy network for sandbox \"1824a18e4d75283cd480e11f47d52dcd690b2fda35ba2a16d2b31bc064a645ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:00.839016 systemd[1]: run-netns-cni\x2df6d507dc\x2d37c0\x2d1d92\x2daf39\x2de34df301afd8.mount: Deactivated successfully.
Jan 20 02:25:00.958737 containerd[1640]: time="2026-01-20T02:25:00.945313077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746557d8fc-ztfh7,Uid:e572f9c2-ce5a-4d3c-956a-a140a15040fb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1824a18e4d75283cd480e11f47d52dcd690b2fda35ba2a16d2b31bc064a645ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:00.981479 kubelet[3053]: E0120 02:25:00.957044 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1824a18e4d75283cd480e11f47d52dcd690b2fda35ba2a16d2b31bc064a645ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:00.981479 kubelet[3053]: E0120 02:25:00.957176 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1824a18e4d75283cd480e11f47d52dcd690b2fda35ba2a16d2b31bc064a645ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7"
Jan 20 02:25:00.981479 kubelet[3053]: E0120 02:25:00.957236 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1824a18e4d75283cd480e11f47d52dcd690b2fda35ba2a16d2b31bc064a645ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7"
Jan 20 02:25:01.004777 kubelet[3053]: E0120 02:25:00.957306 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1824a18e4d75283cd480e11f47d52dcd690b2fda35ba2a16d2b31bc064a645ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb"
Jan 20 02:25:01.075457 containerd[1640]: time="2026-01-20T02:25:01.072040036Z" level=error msg="Failed to destroy network for sandbox \"54f3cfc2aa5b1d4d187005918e1d10b06c69d608690068b89b87dfcfcde69a31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:01.195177 systemd[1]: run-netns-cni\x2dd1315f20\x2d5f8a\x2d50c3\x2d9511\x2d02ba1b8c082d.mount: Deactivated successfully.
Jan 20 02:25:01.418887 containerd[1640]: time="2026-01-20T02:25:01.404748526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tfwc7,Uid:4892884d-a213-4dd6-ab53-844c331ae6d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f3cfc2aa5b1d4d187005918e1d10b06c69d608690068b89b87dfcfcde69a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:01.419113 kubelet[3053]: E0120 02:25:01.405055 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f3cfc2aa5b1d4d187005918e1d10b06c69d608690068b89b87dfcfcde69a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:01.419883 kubelet[3053]: E0120 02:25:01.419802 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f3cfc2aa5b1d4d187005918e1d10b06c69d608690068b89b87dfcfcde69a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tfwc7"
Jan 20 02:25:01.420049 kubelet[3053]: E0120 02:25:01.420016 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f3cfc2aa5b1d4d187005918e1d10b06c69d608690068b89b87dfcfcde69a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tfwc7"
Jan 20 02:25:01.429738 kubelet[3053]: E0120 02:25:01.429435 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54f3cfc2aa5b1d4d187005918e1d10b06c69d608690068b89b87dfcfcde69a31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1"
Jan 20 02:25:01.445462 containerd[1640]: time="2026-01-20T02:25:01.439870574Z" level=error msg="Failed to destroy network for sandbox \"19981a504176ff60998f2c8c9f0489a84c3915633464ca3448eca203a86a3f57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:01.522790 systemd[1]: run-netns-cni\x2d9527acbe\x2dc1e6\x2d19de\x2de3cb\x2d0efc6ccf9b46.mount: Deactivated successfully.
Jan 20 02:25:01.766015 containerd[1640]: time="2026-01-20T02:25:01.755990558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd7f95659-84svg,Uid:4693d45f-0393-4dfc-8971-a75e80c17ac2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19981a504176ff60998f2c8c9f0489a84c3915633464ca3448eca203a86a3f57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:01.775440 kubelet[3053]: E0120 02:25:01.775382 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19981a504176ff60998f2c8c9f0489a84c3915633464ca3448eca203a86a3f57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:01.790057 kubelet[3053]: E0120 02:25:01.789419 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19981a504176ff60998f2c8c9f0489a84c3915633464ca3448eca203a86a3f57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd7f95659-84svg"
Jan 20 02:25:01.790057 kubelet[3053]: E0120 02:25:01.789474 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19981a504176ff60998f2c8c9f0489a84c3915633464ca3448eca203a86a3f57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd7f95659-84svg"
Jan 20 02:25:01.790057 kubelet[3053]: E0120 02:25:01.789627 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bd7f95659-84svg_calico-system(4693d45f-0393-4dfc-8971-a75e80c17ac2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bd7f95659-84svg_calico-system(4693d45f-0393-4dfc-8971-a75e80c17ac2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19981a504176ff60998f2c8c9f0489a84c3915633464ca3448eca203a86a3f57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bd7f95659-84svg" podUID="4693d45f-0393-4dfc-8971-a75e80c17ac2"
Jan 20 02:25:02.427814 containerd[1640]: time="2026-01-20T02:25:02.427478759Z" level=error msg="Failed to destroy network for sandbox \"173cef722a82fe017a229532301ac76d1be588b963f87565769c855a04b63130\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:02.462998 systemd[1]: run-netns-cni\x2d6b0fc3fb\x2d04ed\x2d0f4d\x2d5f4e\x2d739ace0d440e.mount: Deactivated successfully.
Jan 20 02:25:02.521993 containerd[1640]: time="2026-01-20T02:25:02.521888065Z" level=error msg="Failed to destroy network for sandbox \"c660b8814fbd78772f5b1c1487e8428d59928c193cd97df35c03a6a34898bb68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:02.548797 containerd[1640]: time="2026-01-20T02:25:02.548627715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4cpfh,Uid:bc4468de-9eba-48b5-88c5-c38dc3f08d39,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"173cef722a82fe017a229532301ac76d1be588b963f87565769c855a04b63130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:02.578491 systemd[1]: run-netns-cni\x2dcc32a09a\x2d5a27\x2d857d\x2d24ee\x2d17452b622712.mount: Deactivated successfully.
Jan 20 02:25:02.590934 kubelet[3053]: E0120 02:25:02.590845 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"173cef722a82fe017a229532301ac76d1be588b963f87565769c855a04b63130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:02.615750 kubelet[3053]: E0120 02:25:02.590945 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"173cef722a82fe017a229532301ac76d1be588b963f87565769c855a04b63130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4cpfh"
Jan 20 02:25:02.615750 kubelet[3053]: E0120 02:25:02.590979 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"173cef722a82fe017a229532301ac76d1be588b963f87565769c855a04b63130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4cpfh"
Jan 20 02:25:02.615750 kubelet[3053]: E0120 02:25:02.591049 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4cpfh_kube-system(bc4468de-9eba-48b5-88c5-c38dc3f08d39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4cpfh_kube-system(bc4468de-9eba-48b5-88c5-c38dc3f08d39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"173cef722a82fe017a229532301ac76d1be588b963f87565769c855a04b63130\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4cpfh" podUID="bc4468de-9eba-48b5-88c5-c38dc3f08d39"
Jan 20 02:25:02.657503 containerd[1640]: time="2026-01-20T02:25:02.656302450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m48wx,Uid:7d7d1be6-3f16-4e7b-a636-54909266045f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c660b8814fbd78772f5b1c1487e8428d59928c193cd97df35c03a6a34898bb68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:02.672870 kubelet[3053]: E0120 02:25:02.672812 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c660b8814fbd78772f5b1c1487e8428d59928c193cd97df35c03a6a34898bb68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:02.673848 kubelet[3053]: E0120 02:25:02.673341 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c660b8814fbd78772f5b1c1487e8428d59928c193cd97df35c03a6a34898bb68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-m48wx"
Jan 20 02:25:02.673848 kubelet[3053]: E0120 02:25:02.673386 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c660b8814fbd78772f5b1c1487e8428d59928c193cd97df35c03a6a34898bb68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-m48wx"
Jan 20 02:25:02.673848 kubelet[3053]: E0120 02:25:02.673461 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-m48wx_kube-system(7d7d1be6-3f16-4e7b-a636-54909266045f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-m48wx_kube-system(7d7d1be6-3f16-4e7b-a636-54909266045f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c660b8814fbd78772f5b1c1487e8428d59928c193cd97df35c03a6a34898bb68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-m48wx" podUID="7d7d1be6-3f16-4e7b-a636-54909266045f"
Jan 20 02:25:02.730681 containerd[1640]: time="2026-01-20T02:25:02.730364368Z" level=error msg="Failed to destroy network for sandbox \"0944059b4d24d7abe4f58fb1516644b42092a2a78a8ecfb0e6fdb960279b53f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:02.742033 systemd[1]: run-netns-cni\x2d315c8928\x2df39a\x2d0e0f\x2dbeb7\x2d3364ac852f3c.mount: Deactivated successfully.
Jan 20 02:25:02.749761 containerd[1640]: time="2026-01-20T02:25:02.749698434Z" level=error msg="Failed to destroy network for sandbox \"42714e88f88acbc2286c47ba495cd7d66e963e906199c0323210f2bc18f9ce76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:02.766315 containerd[1640]: time="2026-01-20T02:25:02.763622842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-r829f,Uid:ca9f2980-346b-4927-8985-9cb6081e02db,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0944059b4d24d7abe4f58fb1516644b42092a2a78a8ecfb0e6fdb960279b53f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:02.771502 kubelet[3053]: E0120 02:25:02.764263 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0944059b4d24d7abe4f58fb1516644b42092a2a78a8ecfb0e6fdb960279b53f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:02.781285 kubelet[3053]: E0120 02:25:02.773362 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0944059b4d24d7abe4f58fb1516644b42092a2a78a8ecfb0e6fdb960279b53f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" Jan 20 02:25:02.781285 kubelet[3053]: E0120 02:25:02.773439 3053 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0944059b4d24d7abe4f58fb1516644b42092a2a78a8ecfb0e6fdb960279b53f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" Jan 20 02:25:02.781285 kubelet[3053]: E0120 02:25:02.773580 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0944059b4d24d7abe4f58fb1516644b42092a2a78a8ecfb0e6fdb960279b53f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:25:02.800874 systemd[1]: run-netns-cni\x2d353aef0a\x2da7e6\x2dc2fc\x2d5883\x2df1bbe46c8d3f.mount: Deactivated successfully. 
Jan 20 02:25:02.901813 containerd[1640]: time="2026-01-20T02:25:02.899273480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-v64bv,Uid:4d193768-31ad-4962-ae34-80e85c7499df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"42714e88f88acbc2286c47ba495cd7d66e963e906199c0323210f2bc18f9ce76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:02.907649 kubelet[3053]: E0120 02:25:02.906728 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42714e88f88acbc2286c47ba495cd7d66e963e906199c0323210f2bc18f9ce76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:02.907649 kubelet[3053]: E0120 02:25:02.906841 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42714e88f88acbc2286c47ba495cd7d66e963e906199c0323210f2bc18f9ce76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" Jan 20 02:25:02.907649 kubelet[3053]: E0120 02:25:02.906871 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42714e88f88acbc2286c47ba495cd7d66e963e906199c0323210f2bc18f9ce76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" Jan 20 02:25:02.907929 kubelet[3053]: E0120 02:25:02.906950 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42714e88f88acbc2286c47ba495cd7d66e963e906199c0323210f2bc18f9ce76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:25:09.553047 containerd[1640]: time="2026-01-20T02:25:09.539899033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9lglv,Uid:797382c1-6a9f-48bd-be88-5e85feeef509,Namespace:calico-system,Attempt:0,}" Jan 20 02:25:10.756681 containerd[1640]: time="2026-01-20T02:25:10.754067870Z" level=error msg="Failed to destroy network for sandbox \"8475d205e4a1b80072e923fb4e62489f1c0bf3481c797fbb9d5214f12c9fd71d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:10.789266 systemd[1]: run-netns-cni\x2dde8da4a9\x2d834a\x2d4d53\x2df4a6\x2d9c667da4b9e5.mount: Deactivated successfully. 
Jan 20 02:25:10.828101 containerd[1640]: time="2026-01-20T02:25:10.827812030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9lglv,Uid:797382c1-6a9f-48bd-be88-5e85feeef509,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8475d205e4a1b80072e923fb4e62489f1c0bf3481c797fbb9d5214f12c9fd71d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:10.836317 kubelet[3053]: E0120 02:25:10.832299 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8475d205e4a1b80072e923fb4e62489f1c0bf3481c797fbb9d5214f12c9fd71d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:10.836317 kubelet[3053]: E0120 02:25:10.835893 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8475d205e4a1b80072e923fb4e62489f1c0bf3481c797fbb9d5214f12c9fd71d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9lglv" Jan 20 02:25:10.836317 kubelet[3053]: E0120 02:25:10.835938 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8475d205e4a1b80072e923fb4e62489f1c0bf3481c797fbb9d5214f12c9fd71d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9lglv" 
Jan 20 02:25:10.837217 kubelet[3053]: E0120 02:25:10.836010 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8475d205e4a1b80072e923fb4e62489f1c0bf3481c797fbb9d5214f12c9fd71d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:25:13.542926 containerd[1640]: time="2026-01-20T02:25:13.535934969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746557d8fc-ztfh7,Uid:e572f9c2-ce5a-4d3c-956a-a140a15040fb,Namespace:calico-system,Attempt:0,}" Jan 20 02:25:14.353314 containerd[1640]: time="2026-01-20T02:25:14.353240338Z" level=error msg="Failed to destroy network for sandbox \"0c75ea128bcc4d6886764bea6e232079f73d6c83c4ac4e32c27e6c47976e29a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:14.372202 systemd[1]: run-netns-cni\x2d3138d227\x2d25d5\x2d12df\x2ddf7b\x2db3cdf8c246f0.mount: Deactivated successfully. 
Jan 20 02:25:14.405982 containerd[1640]: time="2026-01-20T02:25:14.405353684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746557d8fc-ztfh7,Uid:e572f9c2-ce5a-4d3c-956a-a140a15040fb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c75ea128bcc4d6886764bea6e232079f73d6c83c4ac4e32c27e6c47976e29a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:14.411868 kubelet[3053]: E0120 02:25:14.411801 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c75ea128bcc4d6886764bea6e232079f73d6c83c4ac4e32c27e6c47976e29a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:14.412838 kubelet[3053]: E0120 02:25:14.412799 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c75ea128bcc4d6886764bea6e232079f73d6c83c4ac4e32c27e6c47976e29a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" Jan 20 02:25:14.413007 kubelet[3053]: E0120 02:25:14.412981 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c75ea128bcc4d6886764bea6e232079f73d6c83c4ac4e32c27e6c47976e29a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" Jan 20 02:25:14.413586 kubelet[3053]: E0120 02:25:14.413487 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c75ea128bcc4d6886764bea6e232079f73d6c83c4ac4e32c27e6c47976e29a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:25:14.568929 kubelet[3053]: E0120 02:25:14.568775 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:25:14.580585 containerd[1640]: time="2026-01-20T02:25:14.576145045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tfwc7,Uid:4892884d-a213-4dd6-ab53-844c331ae6d1,Namespace:calico-system,Attempt:0,}" Jan 20 02:25:14.580585 containerd[1640]: time="2026-01-20T02:25:14.576778528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4cpfh,Uid:bc4468de-9eba-48b5-88c5-c38dc3f08d39,Namespace:kube-system,Attempt:0,}" Jan 20 02:25:15.369223 containerd[1640]: time="2026-01-20T02:25:15.369111945Z" level=error msg="Failed to destroy network for sandbox \"a1d5a3bede8177ec8796ac3010e0f14cbfcb0862665bee05d485a9337b259481\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 20 02:25:15.385696 systemd[1]: run-netns-cni\x2d42aa4f39\x2d7539\x2da41b\x2d679e\x2dbdc181ef034a.mount: Deactivated successfully. Jan 20 02:25:15.408998 containerd[1640]: time="2026-01-20T02:25:15.408269889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tfwc7,Uid:4892884d-a213-4dd6-ab53-844c331ae6d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1d5a3bede8177ec8796ac3010e0f14cbfcb0862665bee05d485a9337b259481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:15.412869 kubelet[3053]: E0120 02:25:15.412749 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1d5a3bede8177ec8796ac3010e0f14cbfcb0862665bee05d485a9337b259481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:15.412869 kubelet[3053]: E0120 02:25:15.412833 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1d5a3bede8177ec8796ac3010e0f14cbfcb0862665bee05d485a9337b259481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tfwc7" Jan 20 02:25:15.419005 kubelet[3053]: E0120 02:25:15.412891 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1d5a3bede8177ec8796ac3010e0f14cbfcb0862665bee05d485a9337b259481\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tfwc7" Jan 20 02:25:15.419005 kubelet[3053]: E0120 02:25:15.412962 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1d5a3bede8177ec8796ac3010e0f14cbfcb0862665bee05d485a9337b259481\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:25:15.516613 containerd[1640]: time="2026-01-20T02:25:15.513985393Z" level=error msg="Failed to destroy network for sandbox \"d51cc8a945ef6c96840ebd122fd04fb7b8c75f6bfadaae4930f08f3f63b527d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:15.524405 systemd[1]: run-netns-cni\x2da8143fa3\x2d399c\x2d8f5a\x2d38c1\x2d34f00c11b297.mount: Deactivated successfully. 
Jan 20 02:25:15.585589 containerd[1640]: time="2026-01-20T02:25:15.582674090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4cpfh,Uid:bc4468de-9eba-48b5-88c5-c38dc3f08d39,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d51cc8a945ef6c96840ebd122fd04fb7b8c75f6bfadaae4930f08f3f63b527d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:15.587045 kubelet[3053]: E0120 02:25:15.586588 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d51cc8a945ef6c96840ebd122fd04fb7b8c75f6bfadaae4930f08f3f63b527d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:15.587045 kubelet[3053]: E0120 02:25:15.586665 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d51cc8a945ef6c96840ebd122fd04fb7b8c75f6bfadaae4930f08f3f63b527d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4cpfh" Jan 20 02:25:15.587045 kubelet[3053]: E0120 02:25:15.586697 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d51cc8a945ef6c96840ebd122fd04fb7b8c75f6bfadaae4930f08f3f63b527d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-4cpfh" Jan 20 02:25:15.587282 kubelet[3053]: E0120 02:25:15.586758 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4cpfh_kube-system(bc4468de-9eba-48b5-88c5-c38dc3f08d39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4cpfh_kube-system(bc4468de-9eba-48b5-88c5-c38dc3f08d39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d51cc8a945ef6c96840ebd122fd04fb7b8c75f6bfadaae4930f08f3f63b527d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4cpfh" podUID="bc4468de-9eba-48b5-88c5-c38dc3f08d39" Jan 20 02:25:16.595641 containerd[1640]: time="2026-01-20T02:25:16.592826776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd7f95659-84svg,Uid:4693d45f-0393-4dfc-8971-a75e80c17ac2,Namespace:calico-system,Attempt:0,}" Jan 20 02:25:16.641031 containerd[1640]: time="2026-01-20T02:25:16.640289475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-v64bv,Uid:4d193768-31ad-4962-ae34-80e85c7499df,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:25:17.606613 kubelet[3053]: E0120 02:25:17.599229 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:25:17.618096 containerd[1640]: time="2026-01-20T02:25:17.616135308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m48wx,Uid:7d7d1be6-3f16-4e7b-a636-54909266045f,Namespace:kube-system,Attempt:0,}" Jan 20 02:25:17.632731 containerd[1640]: time="2026-01-20T02:25:17.631963174Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-r829f,Uid:ca9f2980-346b-4927-8985-9cb6081e02db,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:25:17.646253 containerd[1640]: time="2026-01-20T02:25:17.646054782Z" level=error msg="Failed to destroy network for sandbox \"c00cf150b9600a5f2104357eddfe8b81db9143c3f261fb1de4d28934c1555ee0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:17.661721 systemd[1]: run-netns-cni\x2d08f27415\x2d1811\x2d1e38\x2d0e23\x2dbf1a22c7b8fb.mount: Deactivated successfully. Jan 20 02:25:17.674067 containerd[1640]: time="2026-01-20T02:25:17.673864698Z" level=error msg="Failed to destroy network for sandbox \"5e9d0a1690d3e1c7d77275369269e9f4953291be60e05c4f5e240c50db7ce848\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:17.693867 systemd[1]: run-netns-cni\x2d615ca2d6\x2de4f7\x2d2ec2\x2da93b\x2dc66b80504693.mount: Deactivated successfully. 
Jan 20 02:25:17.696303 containerd[1640]: time="2026-01-20T02:25:17.695426226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-v64bv,Uid:4d193768-31ad-4962-ae34-80e85c7499df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c00cf150b9600a5f2104357eddfe8b81db9143c3f261fb1de4d28934c1555ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:17.697400 containerd[1640]: time="2026-01-20T02:25:17.697358183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd7f95659-84svg,Uid:4693d45f-0393-4dfc-8971-a75e80c17ac2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e9d0a1690d3e1c7d77275369269e9f4953291be60e05c4f5e240c50db7ce848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:17.698376 kubelet[3053]: E0120 02:25:17.698329 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c00cf150b9600a5f2104357eddfe8b81db9143c3f261fb1de4d28934c1555ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:17.699601 kubelet[3053]: E0120 02:25:17.699074 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c00cf150b9600a5f2104357eddfe8b81db9143c3f261fb1de4d28934c1555ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" Jan 20 02:25:17.699601 kubelet[3053]: E0120 02:25:17.699115 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c00cf150b9600a5f2104357eddfe8b81db9143c3f261fb1de4d28934c1555ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" Jan 20 02:25:17.699601 kubelet[3053]: E0120 02:25:17.698611 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e9d0a1690d3e1c7d77275369269e9f4953291be60e05c4f5e240c50db7ce848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:17.699957 kubelet[3053]: E0120 02:25:17.699250 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c00cf150b9600a5f2104357eddfe8b81db9143c3f261fb1de4d28934c1555ee0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:25:17.699957 kubelet[3053]: E0120 02:25:17.699619 3053 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e9d0a1690d3e1c7d77275369269e9f4953291be60e05c4f5e240c50db7ce848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd7f95659-84svg" Jan 20 02:25:17.699957 kubelet[3053]: E0120 02:25:17.699848 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e9d0a1690d3e1c7d77275369269e9f4953291be60e05c4f5e240c50db7ce848\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd7f95659-84svg" Jan 20 02:25:17.700191 kubelet[3053]: E0120 02:25:17.699912 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bd7f95659-84svg_calico-system(4693d45f-0393-4dfc-8971-a75e80c17ac2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bd7f95659-84svg_calico-system(4693d45f-0393-4dfc-8971-a75e80c17ac2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e9d0a1690d3e1c7d77275369269e9f4953291be60e05c4f5e240c50db7ce848\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bd7f95659-84svg" podUID="4693d45f-0393-4dfc-8971-a75e80c17ac2" Jan 20 02:25:17.985440 containerd[1640]: time="2026-01-20T02:25:17.979293744Z" level=error msg="Failed to destroy network for sandbox \"9e1a679418ce17851f81b28e8970c289505a19828c23a1d981743c365b3ba2ec\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:17.995930 systemd[1]: run-netns-cni\x2d582db2e2\x2d518f\x2dace2\x2d51cc\x2d4741931b09b6.mount: Deactivated successfully. Jan 20 02:25:17.999650 containerd[1640]: time="2026-01-20T02:25:17.998354083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-r829f,Uid:ca9f2980-346b-4927-8985-9cb6081e02db,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e1a679418ce17851f81b28e8970c289505a19828c23a1d981743c365b3ba2ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:17.999907 kubelet[3053]: E0120 02:25:17.998970 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e1a679418ce17851f81b28e8970c289505a19828c23a1d981743c365b3ba2ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:17.999907 kubelet[3053]: E0120 02:25:17.999063 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e1a679418ce17851f81b28e8970c289505a19828c23a1d981743c365b3ba2ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" Jan 20 02:25:17.999907 kubelet[3053]: E0120 02:25:17.999094 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"9e1a679418ce17851f81b28e8970c289505a19828c23a1d981743c365b3ba2ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" Jan 20 02:25:18.000048 kubelet[3053]: E0120 02:25:17.999183 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e1a679418ce17851f81b28e8970c289505a19828c23a1d981743c365b3ba2ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:25:18.080967 containerd[1640]: time="2026-01-20T02:25:18.080728863Z" level=error msg="Failed to destroy network for sandbox \"9ffac6a08192e306f92a1671615945453be57c63219b806eb71bd674e3a86205\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:18.091737 systemd[1]: run-netns-cni\x2d587a99c2\x2de0de\x2d5cab\x2d0775\x2d76cdb21b1d8c.mount: Deactivated successfully. 
Jan 20 02:25:18.100443 containerd[1640]: time="2026-01-20T02:25:18.097432965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m48wx,Uid:7d7d1be6-3f16-4e7b-a636-54909266045f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ffac6a08192e306f92a1671615945453be57c63219b806eb71bd674e3a86205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:18.100808 kubelet[3053]: E0120 02:25:18.097904 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ffac6a08192e306f92a1671615945453be57c63219b806eb71bd674e3a86205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:18.100808 kubelet[3053]: E0120 02:25:18.097972 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ffac6a08192e306f92a1671615945453be57c63219b806eb71bd674e3a86205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-m48wx" Jan 20 02:25:18.100808 kubelet[3053]: E0120 02:25:18.098001 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ffac6a08192e306f92a1671615945453be57c63219b806eb71bd674e3a86205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-m48wx" Jan 20 02:25:18.101006 kubelet[3053]: E0120 02:25:18.098059 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-m48wx_kube-system(7d7d1be6-3f16-4e7b-a636-54909266045f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-m48wx_kube-system(7d7d1be6-3f16-4e7b-a636-54909266045f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ffac6a08192e306f92a1671615945453be57c63219b806eb71bd674e3a86205\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-m48wx" podUID="7d7d1be6-3f16-4e7b-a636-54909266045f" Jan 20 02:25:22.767456 containerd[1640]: time="2026-01-20T02:25:22.761757840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9lglv,Uid:797382c1-6a9f-48bd-be88-5e85feeef509,Namespace:calico-system,Attempt:0,}" Jan 20 02:25:23.963826 containerd[1640]: time="2026-01-20T02:25:23.963354689Z" level=error msg="Failed to destroy network for sandbox \"230dd8bec1e0282bcbdf69a68b28ac63ac17467f23052c2b8f39e2a0570bac53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:23.984168 systemd[1]: run-netns-cni\x2d1387224b\x2d268a\x2d91c5\x2d6b58\x2d0ed2de508aac.mount: Deactivated successfully. 
Jan 20 02:25:23.997424 containerd[1640]: time="2026-01-20T02:25:23.996636613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9lglv,Uid:797382c1-6a9f-48bd-be88-5e85feeef509,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"230dd8bec1e0282bcbdf69a68b28ac63ac17467f23052c2b8f39e2a0570bac53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:23.997721 kubelet[3053]: E0120 02:25:23.996987 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230dd8bec1e0282bcbdf69a68b28ac63ac17467f23052c2b8f39e2a0570bac53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:23.997721 kubelet[3053]: E0120 02:25:23.997066 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230dd8bec1e0282bcbdf69a68b28ac63ac17467f23052c2b8f39e2a0570bac53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9lglv" Jan 20 02:25:23.997721 kubelet[3053]: E0120 02:25:23.997100 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230dd8bec1e0282bcbdf69a68b28ac63ac17467f23052c2b8f39e2a0570bac53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9lglv" 
Jan 20 02:25:23.998325 kubelet[3053]: E0120 02:25:23.997172 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"230dd8bec1e0282bcbdf69a68b28ac63ac17467f23052c2b8f39e2a0570bac53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:25:26.662356 containerd[1640]: time="2026-01-20T02:25:26.658803610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tfwc7,Uid:4892884d-a213-4dd6-ab53-844c331ae6d1,Namespace:calico-system,Attempt:0,}" Jan 20 02:25:27.638504 kubelet[3053]: E0120 02:25:27.631639 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:25:27.659351 containerd[1640]: time="2026-01-20T02:25:27.647435019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4cpfh,Uid:bc4468de-9eba-48b5-88c5-c38dc3f08d39,Namespace:kube-system,Attempt:0,}" Jan 20 02:25:27.715342 containerd[1640]: time="2026-01-20T02:25:27.687617841Z" level=error msg="Failed to destroy network for sandbox \"2c9fb5389f13f49cee96fe7e25559f7657e4c70d5be95be49658176bdf86f2fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:27.715342 containerd[1640]: time="2026-01-20T02:25:27.694461622Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746557d8fc-ztfh7,Uid:e572f9c2-ce5a-4d3c-956a-a140a15040fb,Namespace:calico-system,Attempt:0,}" Jan 20 02:25:27.711090 systemd[1]: run-netns-cni\x2d6c2784fc\x2d9817\x2d7cfb\x2ddf48\x2dad45c7026629.mount: Deactivated successfully. Jan 20 02:25:27.971459 containerd[1640]: time="2026-01-20T02:25:27.963467095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tfwc7,Uid:4892884d-a213-4dd6-ab53-844c331ae6d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c9fb5389f13f49cee96fe7e25559f7657e4c70d5be95be49658176bdf86f2fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:27.973292 kubelet[3053]: E0120 02:25:27.973241 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c9fb5389f13f49cee96fe7e25559f7657e4c70d5be95be49658176bdf86f2fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:27.986358 kubelet[3053]: E0120 02:25:27.980189 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c9fb5389f13f49cee96fe7e25559f7657e4c70d5be95be49658176bdf86f2fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tfwc7" Jan 20 02:25:27.986358 kubelet[3053]: E0120 02:25:27.980248 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"2c9fb5389f13f49cee96fe7e25559f7657e4c70d5be95be49658176bdf86f2fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tfwc7" Jan 20 02:25:27.986358 kubelet[3053]: E0120 02:25:27.980336 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c9fb5389f13f49cee96fe7e25559f7657e4c70d5be95be49658176bdf86f2fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:25:28.926885 containerd[1640]: time="2026-01-20T02:25:28.923822525Z" level=error msg="Failed to destroy network for sandbox \"468e9a820a3bab518d58d97e2fd8d08c6323a107eee9b4fba02b2a17f7292606\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:28.973656 systemd[1]: run-netns-cni\x2dc51d0cf6\x2d329d\x2dd417\x2d4845\x2d5e6a0148b19a.mount: Deactivated successfully. 
Jan 20 02:25:29.040440 containerd[1640]: time="2026-01-20T02:25:29.040297067Z" level=error msg="Failed to destroy network for sandbox \"85869079e4d3ca2830a79efa8d88bd2dcb47832e7747a32a596296c8c7b8374f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:29.066190 systemd[1]: run-netns-cni\x2dfb12aedd\x2d39ac\x2d1a64\x2d44af\x2d36a1d25d30c7.mount: Deactivated successfully. Jan 20 02:25:29.072063 containerd[1640]: time="2026-01-20T02:25:29.069995290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4cpfh,Uid:bc4468de-9eba-48b5-88c5-c38dc3f08d39,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"468e9a820a3bab518d58d97e2fd8d08c6323a107eee9b4fba02b2a17f7292606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:29.072249 kubelet[3053]: E0120 02:25:29.070655 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"468e9a820a3bab518d58d97e2fd8d08c6323a107eee9b4fba02b2a17f7292606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:29.077364 kubelet[3053]: E0120 02:25:29.073224 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"468e9a820a3bab518d58d97e2fd8d08c6323a107eee9b4fba02b2a17f7292606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-4cpfh" Jan 20 02:25:29.077364 kubelet[3053]: E0120 02:25:29.073341 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"468e9a820a3bab518d58d97e2fd8d08c6323a107eee9b4fba02b2a17f7292606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4cpfh" Jan 20 02:25:29.077364 kubelet[3053]: E0120 02:25:29.076087 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4cpfh_kube-system(bc4468de-9eba-48b5-88c5-c38dc3f08d39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4cpfh_kube-system(bc4468de-9eba-48b5-88c5-c38dc3f08d39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"468e9a820a3bab518d58d97e2fd8d08c6323a107eee9b4fba02b2a17f7292606\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4cpfh" podUID="bc4468de-9eba-48b5-88c5-c38dc3f08d39" Jan 20 02:25:29.132582 containerd[1640]: time="2026-01-20T02:25:29.132298672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746557d8fc-ztfh7,Uid:e572f9c2-ce5a-4d3c-956a-a140a15040fb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85869079e4d3ca2830a79efa8d88bd2dcb47832e7747a32a596296c8c7b8374f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:29.133277 kubelet[3053]: E0120 02:25:29.133224 3053 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85869079e4d3ca2830a79efa8d88bd2dcb47832e7747a32a596296c8c7b8374f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:29.133961 kubelet[3053]: E0120 02:25:29.133925 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85869079e4d3ca2830a79efa8d88bd2dcb47832e7747a32a596296c8c7b8374f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" Jan 20 02:25:29.134296 kubelet[3053]: E0120 02:25:29.134210 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85869079e4d3ca2830a79efa8d88bd2dcb47832e7747a32a596296c8c7b8374f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" Jan 20 02:25:29.134707 kubelet[3053]: E0120 02:25:29.134601 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85869079e4d3ca2830a79efa8d88bd2dcb47832e7747a32a596296c8c7b8374f\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:25:29.567068 kubelet[3053]: E0120 02:25:29.563966 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:25:29.567264 containerd[1640]: time="2026-01-20T02:25:29.564956171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m48wx,Uid:7d7d1be6-3f16-4e7b-a636-54909266045f,Namespace:kube-system,Attempt:0,}" Jan 20 02:25:30.368461 containerd[1640]: time="2026-01-20T02:25:30.355932575Z" level=error msg="Failed to destroy network for sandbox \"9cf9609d1a5d78dc1d91d6d7896e820a04e69f61eb8aa9a040a5dbea2203d627\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:30.366847 systemd[1]: run-netns-cni\x2d4c62faa3\x2d4f7c\x2dac82\x2d805c\x2ddaffd9fb5b00.mount: Deactivated successfully. 
Jan 20 02:25:30.390854 containerd[1640]: time="2026-01-20T02:25:30.388848405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m48wx,Uid:7d7d1be6-3f16-4e7b-a636-54909266045f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cf9609d1a5d78dc1d91d6d7896e820a04e69f61eb8aa9a040a5dbea2203d627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:30.391122 kubelet[3053]: E0120 02:25:30.389909 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cf9609d1a5d78dc1d91d6d7896e820a04e69f61eb8aa9a040a5dbea2203d627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:30.391122 kubelet[3053]: E0120 02:25:30.390001 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cf9609d1a5d78dc1d91d6d7896e820a04e69f61eb8aa9a040a5dbea2203d627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-m48wx" Jan 20 02:25:30.391122 kubelet[3053]: E0120 02:25:30.390033 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cf9609d1a5d78dc1d91d6d7896e820a04e69f61eb8aa9a040a5dbea2203d627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-m48wx" Jan 20 02:25:30.393716 kubelet[3053]: E0120 02:25:30.390118 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-m48wx_kube-system(7d7d1be6-3f16-4e7b-a636-54909266045f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-m48wx_kube-system(7d7d1be6-3f16-4e7b-a636-54909266045f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cf9609d1a5d78dc1d91d6d7896e820a04e69f61eb8aa9a040a5dbea2203d627\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-m48wx" podUID="7d7d1be6-3f16-4e7b-a636-54909266045f" Jan 20 02:25:30.548952 containerd[1640]: time="2026-01-20T02:25:30.547359930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-r829f,Uid:ca9f2980-346b-4927-8985-9cb6081e02db,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:25:31.229951 containerd[1640]: time="2026-01-20T02:25:31.226411473Z" level=error msg="Failed to destroy network for sandbox \"0d12bb46646f4436102b99cc4f6c13ef2ec5db108fc380283cb7156a6cbbbdb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:31.240230 systemd[1]: run-netns-cni\x2dcd447d8f\x2d4a1f\x2d8002\x2d9d4f\x2d3ab667f581b9.mount: Deactivated successfully. 
Jan 20 02:25:31.333560 containerd[1640]: time="2026-01-20T02:25:31.332994896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-r829f,Uid:ca9f2980-346b-4927-8985-9cb6081e02db,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d12bb46646f4436102b99cc4f6c13ef2ec5db108fc380283cb7156a6cbbbdb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:31.342123 kubelet[3053]: E0120 02:25:31.336751 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d12bb46646f4436102b99cc4f6c13ef2ec5db108fc380283cb7156a6cbbbdb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:31.342123 kubelet[3053]: E0120 02:25:31.336875 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d12bb46646f4436102b99cc4f6c13ef2ec5db108fc380283cb7156a6cbbbdb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" Jan 20 02:25:31.342123 kubelet[3053]: E0120 02:25:31.336905 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d12bb46646f4436102b99cc4f6c13ef2ec5db108fc380283cb7156a6cbbbdb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" Jan 20 02:25:31.342504 kubelet[3053]: E0120 02:25:31.336968 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d12bb46646f4436102b99cc4f6c13ef2ec5db108fc380283cb7156a6cbbbdb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:25:32.619750 containerd[1640]: time="2026-01-20T02:25:32.613601979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-v64bv,Uid:4d193768-31ad-4962-ae34-80e85c7499df,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:25:32.631997 containerd[1640]: time="2026-01-20T02:25:32.628066448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd7f95659-84svg,Uid:4693d45f-0393-4dfc-8971-a75e80c17ac2,Namespace:calico-system,Attempt:0,}" Jan 20 02:25:32.891583 containerd[1640]: time="2026-01-20T02:25:32.891336364Z" level=error msg="Failed to destroy network for sandbox \"f0f3b74cc332fc5f29fa81382454a473f8f0adc9a3c38897f733fbbdbce98d9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:32.908413 systemd[1]: run-netns-cni\x2d06133c60\x2dc9c2\x2de088\x2de107\x2df88b6f4db150.mount: Deactivated successfully. 
Jan 20 02:25:32.925881 containerd[1640]: time="2026-01-20T02:25:32.924017906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-v64bv,Uid:4d193768-31ad-4962-ae34-80e85c7499df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f3b74cc332fc5f29fa81382454a473f8f0adc9a3c38897f733fbbdbce98d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:32.926348 kubelet[3053]: E0120 02:25:32.926140 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f3b74cc332fc5f29fa81382454a473f8f0adc9a3c38897f733fbbdbce98d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:32.926348 kubelet[3053]: E0120 02:25:32.926302 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f3b74cc332fc5f29fa81382454a473f8f0adc9a3c38897f733fbbdbce98d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" Jan 20 02:25:32.926348 kubelet[3053]: E0120 02:25:32.926333 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f3b74cc332fc5f29fa81382454a473f8f0adc9a3c38897f733fbbdbce98d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" Jan 20 02:25:32.927216 kubelet[3053]: E0120 02:25:32.926410 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0f3b74cc332fc5f29fa81382454a473f8f0adc9a3c38897f733fbbdbce98d9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:25:32.966853 containerd[1640]: time="2026-01-20T02:25:32.966722508Z" level=error msg="Failed to destroy network for sandbox \"3065d28dec1d0991a6aad5b74cd4c23893ee0b0a1eaea1710407bb265f3f0339\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:32.974680 systemd[1]: run-netns-cni\x2dc69f987e\x2d3fba\x2d2af8\x2d4d01\x2dacc9a8020723.mount: Deactivated successfully. 
Jan 20 02:25:32.986767 containerd[1640]: time="2026-01-20T02:25:32.983921987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd7f95659-84svg,Uid:4693d45f-0393-4dfc-8971-a75e80c17ac2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3065d28dec1d0991a6aad5b74cd4c23893ee0b0a1eaea1710407bb265f3f0339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:32.987040 kubelet[3053]: E0120 02:25:32.984465 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3065d28dec1d0991a6aad5b74cd4c23893ee0b0a1eaea1710407bb265f3f0339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:32.987040 kubelet[3053]: E0120 02:25:32.984718 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3065d28dec1d0991a6aad5b74cd4c23893ee0b0a1eaea1710407bb265f3f0339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd7f95659-84svg" Jan 20 02:25:32.987040 kubelet[3053]: E0120 02:25:32.984757 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3065d28dec1d0991a6aad5b74cd4c23893ee0b0a1eaea1710407bb265f3f0339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-bd7f95659-84svg" Jan 20 02:25:32.987241 kubelet[3053]: E0120 02:25:32.985479 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bd7f95659-84svg_calico-system(4693d45f-0393-4dfc-8971-a75e80c17ac2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bd7f95659-84svg_calico-system(4693d45f-0393-4dfc-8971-a75e80c17ac2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3065d28dec1d0991a6aad5b74cd4c23893ee0b0a1eaea1710407bb265f3f0339\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bd7f95659-84svg" podUID="4693d45f-0393-4dfc-8971-a75e80c17ac2" Jan 20 02:25:35.750471 containerd[1640]: time="2026-01-20T02:25:35.750184733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9lglv,Uid:797382c1-6a9f-48bd-be88-5e85feeef509,Namespace:calico-system,Attempt:0,}" Jan 20 02:25:36.333110 containerd[1640]: time="2026-01-20T02:25:36.324173338Z" level=error msg="Failed to destroy network for sandbox \"6e384120d2e633e6bbca56359eeb7342c7df3c0cb9fcd68a3effcc8d91c301f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:36.327461 systemd[1]: run-netns-cni\x2d5150c00d\x2dfaea\x2d977c\x2de73f\x2d31b8a62f1dc1.mount: Deactivated successfully. 
Jan 20 02:25:36.362748 containerd[1640]: time="2026-01-20T02:25:36.362676366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9lglv,Uid:797382c1-6a9f-48bd-be88-5e85feeef509,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e384120d2e633e6bbca56359eeb7342c7df3c0cb9fcd68a3effcc8d91c301f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:36.371241 kubelet[3053]: E0120 02:25:36.367464 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e384120d2e633e6bbca56359eeb7342c7df3c0cb9fcd68a3effcc8d91c301f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:36.371241 kubelet[3053]: E0120 02:25:36.367709 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e384120d2e633e6bbca56359eeb7342c7df3c0cb9fcd68a3effcc8d91c301f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9lglv"
Jan 20 02:25:36.371241 kubelet[3053]: E0120 02:25:36.367810 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e384120d2e633e6bbca56359eeb7342c7df3c0cb9fcd68a3effcc8d91c301f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9lglv"
Jan 20 02:25:36.372128 kubelet[3053]: E0120 02:25:36.367978 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e384120d2e633e6bbca56359eeb7342c7df3c0cb9fcd68a3effcc8d91c301f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509"
Jan 20 02:25:41.607469 containerd[1640]: time="2026-01-20T02:25:41.599580648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746557d8fc-ztfh7,Uid:e572f9c2-ce5a-4d3c-956a-a140a15040fb,Namespace:calico-system,Attempt:0,}"
Jan 20 02:25:41.623614 containerd[1640]: time="2026-01-20T02:25:41.623100723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tfwc7,Uid:4892884d-a213-4dd6-ab53-844c331ae6d1,Namespace:calico-system,Attempt:0,}"
Jan 20 02:25:41.635463 kubelet[3053]: E0120 02:25:41.635413 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:25:41.650428 containerd[1640]: time="2026-01-20T02:25:41.639616433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4cpfh,Uid:bc4468de-9eba-48b5-88c5-c38dc3f08d39,Namespace:kube-system,Attempt:0,}"
Jan 20 02:25:41.654386 kubelet[3053]: E0120 02:25:41.654138 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:25:41.655068 containerd[1640]: time="2026-01-20T02:25:41.654967815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m48wx,Uid:7d7d1be6-3f16-4e7b-a636-54909266045f,Namespace:kube-system,Attempt:0,}"
Jan 20 02:25:42.718922 containerd[1640]: time="2026-01-20T02:25:42.718717480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-r829f,Uid:ca9f2980-346b-4927-8985-9cb6081e02db,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 02:25:43.233657 containerd[1640]: time="2026-01-20T02:25:43.230363683Z" level=error msg="Failed to destroy network for sandbox \"5a8645ff2cef543826afa27d0ba285c7a07ba49157f2993c89600d6a013821a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:43.240505 systemd[1]: run-netns-cni\x2dc337547f\x2d7bc9\x2d33d5\x2d1686\x2d2e7c39f52271.mount: Deactivated successfully.
Jan 20 02:25:43.304797 containerd[1640]: time="2026-01-20T02:25:43.300610136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4cpfh,Uid:bc4468de-9eba-48b5-88c5-c38dc3f08d39,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a8645ff2cef543826afa27d0ba285c7a07ba49157f2993c89600d6a013821a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:43.310341 kubelet[3053]: E0120 02:25:43.303346 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a8645ff2cef543826afa27d0ba285c7a07ba49157f2993c89600d6a013821a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:43.310341 kubelet[3053]: E0120 02:25:43.303588 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a8645ff2cef543826afa27d0ba285c7a07ba49157f2993c89600d6a013821a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4cpfh"
Jan 20 02:25:43.310341 kubelet[3053]: E0120 02:25:43.303887 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a8645ff2cef543826afa27d0ba285c7a07ba49157f2993c89600d6a013821a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4cpfh"
Jan 20 02:25:43.406102 kubelet[3053]: E0120 02:25:43.304132 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4cpfh_kube-system(bc4468de-9eba-48b5-88c5-c38dc3f08d39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4cpfh_kube-system(bc4468de-9eba-48b5-88c5-c38dc3f08d39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a8645ff2cef543826afa27d0ba285c7a07ba49157f2993c89600d6a013821a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4cpfh" podUID="bc4468de-9eba-48b5-88c5-c38dc3f08d39"
Jan 20 02:25:43.406102 kubelet[3053]: E0120 02:25:43.405107 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f4a2c6977a8b2855ba5ec15d84126cc61bcf7f6062ccfba8ce57f0fa30fbad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:43.388865 systemd[1]: run-netns-cni\x2dd0dfc7f2\x2d752f\x2d8b66\x2d5720\x2d50406ac02d2e.mount: Deactivated successfully.
Jan 20 02:25:43.406416 containerd[1640]: time="2026-01-20T02:25:43.329415899Z" level=error msg="Failed to destroy network for sandbox \"54f4a2c6977a8b2855ba5ec15d84126cc61bcf7f6062ccfba8ce57f0fa30fbad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:43.406416 containerd[1640]: time="2026-01-20T02:25:43.401710904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tfwc7,Uid:4892884d-a213-4dd6-ab53-844c331ae6d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f4a2c6977a8b2855ba5ec15d84126cc61bcf7f6062ccfba8ce57f0fa30fbad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:43.411189 kubelet[3053]: E0120 02:25:43.408132 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f4a2c6977a8b2855ba5ec15d84126cc61bcf7f6062ccfba8ce57f0fa30fbad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tfwc7"
Jan 20 02:25:43.411189 kubelet[3053]: E0120 02:25:43.408195 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f4a2c6977a8b2855ba5ec15d84126cc61bcf7f6062ccfba8ce57f0fa30fbad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tfwc7"
Jan 20 02:25:43.411189 kubelet[3053]: E0120 02:25:43.408269 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54f4a2c6977a8b2855ba5ec15d84126cc61bcf7f6062ccfba8ce57f0fa30fbad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1"
Jan 20 02:25:43.440062 containerd[1640]: time="2026-01-20T02:25:43.439613800Z" level=error msg="Failed to destroy network for sandbox \"19860b7b583d87a670dd79377deef72ef25a709d0f459b0079f80cbbe04c7a66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:43.454429 containerd[1640]: time="2026-01-20T02:25:43.453762849Z" level=error msg="Failed to destroy network for sandbox \"c8592851a459dac53d7acd3a053fe7f289a61d01890013efffff6cc95a280ad4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:43.466900 systemd[1]: run-netns-cni\x2dbafd37f6\x2dcdc6\x2d0a78\x2d9b9e\x2db1b037639215.mount: Deactivated successfully.
Jan 20 02:25:43.474739 containerd[1640]: time="2026-01-20T02:25:43.473103258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746557d8fc-ztfh7,Uid:e572f9c2-ce5a-4d3c-956a-a140a15040fb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19860b7b583d87a670dd79377deef72ef25a709d0f459b0079f80cbbe04c7a66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:43.474947 kubelet[3053]: E0120 02:25:43.473984 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19860b7b583d87a670dd79377deef72ef25a709d0f459b0079f80cbbe04c7a66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:43.474947 kubelet[3053]: E0120 02:25:43.474093 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19860b7b583d87a670dd79377deef72ef25a709d0f459b0079f80cbbe04c7a66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7"
Jan 20 02:25:43.474947 kubelet[3053]: E0120 02:25:43.474123 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19860b7b583d87a670dd79377deef72ef25a709d0f459b0079f80cbbe04c7a66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7"
Jan 20 02:25:43.475156 kubelet[3053]: E0120 02:25:43.474227 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19860b7b583d87a670dd79377deef72ef25a709d0f459b0079f80cbbe04c7a66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb"
Jan 20 02:25:43.517382 systemd[1]: run-netns-cni\x2d0ff558d8\x2d0bc6\x2decc4\x2d2911\x2da8f25cd07617.mount: Deactivated successfully.
Jan 20 02:25:43.544591 containerd[1640]: time="2026-01-20T02:25:43.540995898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m48wx,Uid:7d7d1be6-3f16-4e7b-a636-54909266045f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8592851a459dac53d7acd3a053fe7f289a61d01890013efffff6cc95a280ad4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:43.556768 kubelet[3053]: E0120 02:25:43.554105 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8592851a459dac53d7acd3a053fe7f289a61d01890013efffff6cc95a280ad4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:43.571222 kubelet[3053]: E0120 02:25:43.569495 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8592851a459dac53d7acd3a053fe7f289a61d01890013efffff6cc95a280ad4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-m48wx"
Jan 20 02:25:43.571222 kubelet[3053]: E0120 02:25:43.570125 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8592851a459dac53d7acd3a053fe7f289a61d01890013efffff6cc95a280ad4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-m48wx"
Jan 20 02:25:43.571222 kubelet[3053]: E0120 02:25:43.570203 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-m48wx_kube-system(7d7d1be6-3f16-4e7b-a636-54909266045f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-m48wx_kube-system(7d7d1be6-3f16-4e7b-a636-54909266045f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8592851a459dac53d7acd3a053fe7f289a61d01890013efffff6cc95a280ad4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-m48wx" podUID="7d7d1be6-3f16-4e7b-a636-54909266045f"
Jan 20 02:25:44.002149 containerd[1640]: time="2026-01-20T02:25:43.990405698Z" level=error msg="Failed to destroy network for sandbox \"34841658946524a26bf7ec5410fe5c3eac7481451492812f5e2a57645b313a9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:44.017607 systemd[1]: run-netns-cni\x2d9494cfd1\x2db79c\x2df1d1\x2df131\x2db3f84be0b667.mount: Deactivated successfully.
Jan 20 02:25:44.066741 containerd[1640]: time="2026-01-20T02:25:44.056279680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-r829f,Uid:ca9f2980-346b-4927-8985-9cb6081e02db,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34841658946524a26bf7ec5410fe5c3eac7481451492812f5e2a57645b313a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:44.066987 kubelet[3053]: E0120 02:25:44.065129 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34841658946524a26bf7ec5410fe5c3eac7481451492812f5e2a57645b313a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:44.066987 kubelet[3053]: E0120 02:25:44.065220 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34841658946524a26bf7ec5410fe5c3eac7481451492812f5e2a57645b313a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f"
Jan 20 02:25:44.066987 kubelet[3053]: E0120 02:25:44.065251 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34841658946524a26bf7ec5410fe5c3eac7481451492812f5e2a57645b313a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f"
Jan 20 02:25:44.069572 kubelet[3053]: E0120 02:25:44.065322 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34841658946524a26bf7ec5410fe5c3eac7481451492812f5e2a57645b313a9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db"
Jan 20 02:25:45.528697 kubelet[3053]: E0120 02:25:45.519145 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:25:46.773120 containerd[1640]: time="2026-01-20T02:25:46.772839168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-v64bv,Uid:4d193768-31ad-4962-ae34-80e85c7499df,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 02:25:47.445338 containerd[1640]: time="2026-01-20T02:25:47.439395820Z" level=error msg="Failed to destroy network for sandbox \"5566302a42a1ff7fcd950dc66b784061ca68f136a4b76d425b02a5422e9a203a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:47.462811 systemd[1]: run-netns-cni\x2d1dc273ac\x2da20b\x2dba5b\x2d48d7\x2d06c39611302b.mount: Deactivated successfully.
Jan 20 02:25:47.474739 containerd[1640]: time="2026-01-20T02:25:47.474630221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-v64bv,Uid:4d193768-31ad-4962-ae34-80e85c7499df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5566302a42a1ff7fcd950dc66b784061ca68f136a4b76d425b02a5422e9a203a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:47.480824 kubelet[3053]: E0120 02:25:47.475777 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5566302a42a1ff7fcd950dc66b784061ca68f136a4b76d425b02a5422e9a203a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:47.481441 kubelet[3053]: E0120 02:25:47.481029 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5566302a42a1ff7fcd950dc66b784061ca68f136a4b76d425b02a5422e9a203a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv"
Jan 20 02:25:47.481441 kubelet[3053]: E0120 02:25:47.481071 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5566302a42a1ff7fcd950dc66b784061ca68f136a4b76d425b02a5422e9a203a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv"
Jan 20 02:25:47.481441 kubelet[3053]: E0120 02:25:47.481189 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5566302a42a1ff7fcd950dc66b784061ca68f136a4b76d425b02a5422e9a203a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df"
Jan 20 02:25:47.591970 containerd[1640]: time="2026-01-20T02:25:47.591483209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd7f95659-84svg,Uid:4693d45f-0393-4dfc-8971-a75e80c17ac2,Namespace:calico-system,Attempt:0,}"
Jan 20 02:25:48.289080 containerd[1640]: time="2026-01-20T02:25:48.266144517Z" level=error msg="Failed to destroy network for sandbox \"f7177427e5347e47eb57b231ce5bcac358fd19d0baca55cb93ea155cd9686cb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:48.274515 systemd[1]: run-netns-cni\x2df0d2afdc\x2dfa27\x2d67bf\x2d604b\x2d57cba1200269.mount: Deactivated successfully.
Jan 20 02:25:48.325403 containerd[1640]: time="2026-01-20T02:25:48.321821564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd7f95659-84svg,Uid:4693d45f-0393-4dfc-8971-a75e80c17ac2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7177427e5347e47eb57b231ce5bcac358fd19d0baca55cb93ea155cd9686cb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:48.325703 kubelet[3053]: E0120 02:25:48.322084 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7177427e5347e47eb57b231ce5bcac358fd19d0baca55cb93ea155cd9686cb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:48.325703 kubelet[3053]: E0120 02:25:48.322207 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7177427e5347e47eb57b231ce5bcac358fd19d0baca55cb93ea155cd9686cb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd7f95659-84svg"
Jan 20 02:25:48.325703 kubelet[3053]: E0120 02:25:48.322240 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7177427e5347e47eb57b231ce5bcac358fd19d0baca55cb93ea155cd9686cb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd7f95659-84svg"
Jan 20 02:25:48.325880 kubelet[3053]: E0120 02:25:48.324458 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bd7f95659-84svg_calico-system(4693d45f-0393-4dfc-8971-a75e80c17ac2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bd7f95659-84svg_calico-system(4693d45f-0393-4dfc-8971-a75e80c17ac2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7177427e5347e47eb57b231ce5bcac358fd19d0baca55cb93ea155cd9686cb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bd7f95659-84svg" podUID="4693d45f-0393-4dfc-8971-a75e80c17ac2"
Jan 20 02:25:48.573602 containerd[1640]: time="2026-01-20T02:25:48.561513756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9lglv,Uid:797382c1-6a9f-48bd-be88-5e85feeef509,Namespace:calico-system,Attempt:0,}"
Jan 20 02:25:49.282365 containerd[1640]: time="2026-01-20T02:25:49.274409416Z" level=error msg="Failed to destroy network for sandbox \"35a70bae3cd7d0f1db005cdf99d8b788f890755cd6d2acf5d895798e311bcf8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:49.291945 systemd[1]: run-netns-cni\x2d747bd115\x2de5e7\x2deff1\x2d5aa5\x2d97ddfde1a79f.mount: Deactivated successfully.
Jan 20 02:25:49.327380 containerd[1640]: time="2026-01-20T02:25:49.326363057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9lglv,Uid:797382c1-6a9f-48bd-be88-5e85feeef509,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35a70bae3cd7d0f1db005cdf99d8b788f890755cd6d2acf5d895798e311bcf8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:49.328159 kubelet[3053]: E0120 02:25:49.326810 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35a70bae3cd7d0f1db005cdf99d8b788f890755cd6d2acf5d895798e311bcf8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:49.328159 kubelet[3053]: E0120 02:25:49.326895 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35a70bae3cd7d0f1db005cdf99d8b788f890755cd6d2acf5d895798e311bcf8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9lglv"
Jan 20 02:25:49.328159 kubelet[3053]: E0120 02:25:49.326930 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35a70bae3cd7d0f1db005cdf99d8b788f890755cd6d2acf5d895798e311bcf8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9lglv"
Jan 20 02:25:49.328766 kubelet[3053]: E0120 02:25:49.327006 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35a70bae3cd7d0f1db005cdf99d8b788f890755cd6d2acf5d895798e311bcf8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509"
Jan 20 02:25:51.526509 kubelet[3053]: E0120 02:25:51.524247 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:25:53.530704 kubelet[3053]: E0120 02:25:53.514033 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:25:54.607065 containerd[1640]: time="2026-01-20T02:25:54.606111583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tfwc7,Uid:4892884d-a213-4dd6-ab53-844c331ae6d1,Namespace:calico-system,Attempt:0,}"
Jan 20 02:25:55.598353 kubelet[3053]: E0120 02:25:55.596955 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:25:55.632680 containerd[1640]: time="2026-01-20T02:25:55.632469065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m48wx,Uid:7d7d1be6-3f16-4e7b-a636-54909266045f,Namespace:kube-system,Attempt:0,}"
Jan 20 02:25:55.659413 containerd[1640]: time="2026-01-20T02:25:55.657391728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746557d8fc-ztfh7,Uid:e572f9c2-ce5a-4d3c-956a-a140a15040fb,Namespace:calico-system,Attempt:0,}"
Jan 20 02:25:56.173447 containerd[1640]: time="2026-01-20T02:25:56.173351469Z" level=error msg="Failed to destroy network for sandbox \"b0dc05e73dd42df59bb43b393b793ccc486066c2c1d5f848fde293ab926e393a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:56.178910 systemd[1]: run-netns-cni\x2d615b8c0b\x2d1811\x2d5cf4\x2d0bd6\x2dd94f3f183626.mount: Deactivated successfully.
Jan 20 02:25:56.449859 containerd[1640]: time="2026-01-20T02:25:56.447454258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tfwc7,Uid:4892884d-a213-4dd6-ab53-844c331ae6d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dc05e73dd42df59bb43b393b793ccc486066c2c1d5f848fde293ab926e393a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:56.482581 kubelet[3053]: E0120 02:25:56.482140 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dc05e73dd42df59bb43b393b793ccc486066c2c1d5f848fde293ab926e393a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 02:25:56.485636 kubelet[3053]: E0120 02:25:56.485457 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dc05e73dd42df59bb43b393b793ccc486066c2c1d5f848fde293ab926e393a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tfwc7"
Jan 20 02:25:56.485636 kubelet[3053]: E0120 02:25:56.485629 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dc05e73dd42df59bb43b393b793ccc486066c2c1d5f848fde293ab926e393a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tfwc7"
Jan 20 02:25:56.486289 kubelet[3053]: E0120 02:25:56.485930 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0dc05e73dd42df59bb43b393b793ccc486066c2c1d5f848fde293ab926e393a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1"
Jan 20 02:25:57.464363 containerd[1640]: time="2026-01-20T02:25:57.458901688Z" level=error msg="Failed to destroy network for sandbox \"d1f76d0377f614af8662bfe6184a701f7bda32a93ac98b1d349a9e3ad6b980bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20
02:25:57.463144 systemd[1]: run-netns-cni\x2dbd4dce4f\x2d9bdf\x2d1198\x2df4df\x2d25a8aca5d90c.mount: Deactivated successfully. Jan 20 02:25:57.494366 containerd[1640]: time="2026-01-20T02:25:57.491401528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746557d8fc-ztfh7,Uid:e572f9c2-ce5a-4d3c-956a-a140a15040fb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1f76d0377f614af8662bfe6184a701f7bda32a93ac98b1d349a9e3ad6b980bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:57.494665 kubelet[3053]: E0120 02:25:57.493297 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1f76d0377f614af8662bfe6184a701f7bda32a93ac98b1d349a9e3ad6b980bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:57.494665 kubelet[3053]: E0120 02:25:57.493381 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1f76d0377f614af8662bfe6184a701f7bda32a93ac98b1d349a9e3ad6b980bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" Jan 20 02:25:57.494665 kubelet[3053]: E0120 02:25:57.493417 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1f76d0377f614af8662bfe6184a701f7bda32a93ac98b1d349a9e3ad6b980bc\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" Jan 20 02:25:57.495365 kubelet[3053]: E0120 02:25:57.493510 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1f76d0377f614af8662bfe6184a701f7bda32a93ac98b1d349a9e3ad6b980bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:25:57.545928 kubelet[3053]: E0120 02:25:57.536354 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:25:57.546082 containerd[1640]: time="2026-01-20T02:25:57.542195180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4cpfh,Uid:bc4468de-9eba-48b5-88c5-c38dc3f08d39,Namespace:kube-system,Attempt:0,}" Jan 20 02:25:57.814718 containerd[1640]: time="2026-01-20T02:25:57.814648425Z" level=error msg="Failed to destroy network for sandbox \"0e838ec87bf17c013aa829f593e1b38500e43a7a019cec6a983da24146541143\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:57.863702 systemd[1]: 
run-netns-cni\x2d02ba599c\x2d6a83\x2d5d5f\x2d2170\x2d753cbc84b4f1.mount: Deactivated successfully. Jan 20 02:25:58.045710 containerd[1640]: time="2026-01-20T02:25:58.045463909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m48wx,Uid:7d7d1be6-3f16-4e7b-a636-54909266045f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e838ec87bf17c013aa829f593e1b38500e43a7a019cec6a983da24146541143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:58.045997 kubelet[3053]: E0120 02:25:58.045864 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e838ec87bf17c013aa829f593e1b38500e43a7a019cec6a983da24146541143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:58.045997 kubelet[3053]: E0120 02:25:58.045928 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e838ec87bf17c013aa829f593e1b38500e43a7a019cec6a983da24146541143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-m48wx" Jan 20 02:25:58.045997 kubelet[3053]: E0120 02:25:58.045950 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e838ec87bf17c013aa829f593e1b38500e43a7a019cec6a983da24146541143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-m48wx" Jan 20 02:25:58.046208 kubelet[3053]: E0120 02:25:58.046013 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-m48wx_kube-system(7d7d1be6-3f16-4e7b-a636-54909266045f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-m48wx_kube-system(7d7d1be6-3f16-4e7b-a636-54909266045f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e838ec87bf17c013aa829f593e1b38500e43a7a019cec6a983da24146541143\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-m48wx" podUID="7d7d1be6-3f16-4e7b-a636-54909266045f" Jan 20 02:25:58.472725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894923443.mount: Deactivated successfully. Jan 20 02:25:58.591156 containerd[1640]: time="2026-01-20T02:25:58.589674682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-r829f,Uid:ca9f2980-346b-4927-8985-9cb6081e02db,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:25:58.826608 containerd[1640]: time="2026-01-20T02:25:58.803349447Z" level=error msg="Failed to destroy network for sandbox \"640025763cceef419c114031b7cad1d2bf687d2efa6213d10dde9caae69086e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:58.836875 systemd[1]: run-netns-cni\x2d6f8d3f78\x2de8f1\x2df9dc\x2d18e3\x2d9b72477e90bc.mount: Deactivated successfully. 
Jan 20 02:25:59.051810 containerd[1640]: time="2026-01-20T02:25:59.049459689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4cpfh,Uid:bc4468de-9eba-48b5-88c5-c38dc3f08d39,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"640025763cceef419c114031b7cad1d2bf687d2efa6213d10dde9caae69086e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:59.064866 kubelet[3053]: E0120 02:25:59.064620 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"640025763cceef419c114031b7cad1d2bf687d2efa6213d10dde9caae69086e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:25:59.078205 kubelet[3053]: E0120 02:25:59.065006 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"640025763cceef419c114031b7cad1d2bf687d2efa6213d10dde9caae69086e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4cpfh" Jan 20 02:25:59.078329 containerd[1640]: time="2026-01-20T02:25:59.073775994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:25:59.080055 kubelet[3053]: E0120 02:25:59.065246 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"640025763cceef419c114031b7cad1d2bf687d2efa6213d10dde9caae69086e4\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4cpfh" Jan 20 02:25:59.082027 containerd[1640]: time="2026-01-20T02:25:59.081825730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 20 02:25:59.082162 kubelet[3053]: E0120 02:25:59.081449 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4cpfh_kube-system(bc4468de-9eba-48b5-88c5-c38dc3f08d39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4cpfh_kube-system(bc4468de-9eba-48b5-88c5-c38dc3f08d39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"640025763cceef419c114031b7cad1d2bf687d2efa6213d10dde9caae69086e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4cpfh" podUID="bc4468de-9eba-48b5-88c5-c38dc3f08d39" Jan 20 02:25:59.132399 containerd[1640]: time="2026-01-20T02:25:59.128735465Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:25:59.143576 containerd[1640]: time="2026-01-20T02:25:59.142979566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 02:25:59.145838 containerd[1640]: time="2026-01-20T02:25:59.144606580Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag 
\"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 1m6.930978417s" Jan 20 02:25:59.145838 containerd[1640]: time="2026-01-20T02:25:59.144766590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 02:25:59.235340 containerd[1640]: time="2026-01-20T02:25:59.235110250Z" level=info msg="CreateContainer within sandbox \"ae84de9c51e820944ea345533a1a31aa6ee2df1028674b301d17a8b74075f781\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 02:25:59.373908 containerd[1640]: time="2026-01-20T02:25:59.365622039Z" level=info msg="Container 8e85802aeee374b1de9dc1374fe0b54bb6ed7fe985d678e67e4a5852dc637d70: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:26:00.385723 containerd[1640]: time="2026-01-20T02:26:00.360343037Z" level=info msg="container event discarded" container=39384d34a04d23681e40005f6d340275dfdc1cb7b4da6d980d4d8394f6e7eeac type=CONTAINER_CREATED_EVENT Jan 20 02:26:00.476657 containerd[1640]: time="2026-01-20T02:26:00.476587646Z" level=info msg="container event discarded" container=39384d34a04d23681e40005f6d340275dfdc1cb7b4da6d980d4d8394f6e7eeac type=CONTAINER_STARTED_EVENT Jan 20 02:26:00.487787 containerd[1640]: time="2026-01-20T02:26:00.487691167Z" level=info msg="CreateContainer within sandbox \"ae84de9c51e820944ea345533a1a31aa6ee2df1028674b301d17a8b74075f781\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8e85802aeee374b1de9dc1374fe0b54bb6ed7fe985d678e67e4a5852dc637d70\"" Jan 20 02:26:00.504609 containerd[1640]: time="2026-01-20T02:26:00.489628914Z" level=info msg="StartContainer for \"8e85802aeee374b1de9dc1374fe0b54bb6ed7fe985d678e67e4a5852dc637d70\"" Jan 20 02:26:00.504609 containerd[1640]: time="2026-01-20T02:26:00.493880257Z" level=info msg="connecting to 
shim 8e85802aeee374b1de9dc1374fe0b54bb6ed7fe985d678e67e4a5852dc637d70" address="unix:///run/containerd/s/463fc380af1408ace911e3c035f7b7e73e24b72364a7b26d865255c7456e3318" protocol=ttrpc version=3 Jan 20 02:26:00.504609 containerd[1640]: time="2026-01-20T02:26:00.497730327Z" level=info msg="container event discarded" container=35266ffcbfefb91d99c73abb389f64dc351f36f770e4f3376793cfb932defa8a type=CONTAINER_CREATED_EVENT Jan 20 02:26:00.504609 containerd[1640]: time="2026-01-20T02:26:00.497754331Z" level=info msg="container event discarded" container=35266ffcbfefb91d99c73abb389f64dc351f36f770e4f3376793cfb932defa8a type=CONTAINER_STARTED_EVENT Jan 20 02:26:00.514700 containerd[1640]: time="2026-01-20T02:26:00.514613797Z" level=info msg="container event discarded" container=86b6c08de92f4425beb9a72dc855bd6bdee335021439b6067d1c6126732f290a type=CONTAINER_CREATED_EVENT Jan 20 02:26:00.514700 containerd[1640]: time="2026-01-20T02:26:00.514659713Z" level=info msg="container event discarded" container=86b6c08de92f4425beb9a72dc855bd6bdee335021439b6067d1c6126732f290a type=CONTAINER_STARTED_EVENT Jan 20 02:26:00.612199 containerd[1640]: time="2026-01-20T02:26:00.606271899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd7f95659-84svg,Uid:4693d45f-0393-4dfc-8971-a75e80c17ac2,Namespace:calico-system,Attempt:0,}" Jan 20 02:26:00.779783 containerd[1640]: time="2026-01-20T02:26:00.778504254Z" level=info msg="container event discarded" container=abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7 type=CONTAINER_CREATED_EVENT Jan 20 02:26:00.939651 containerd[1640]: time="2026-01-20T02:26:00.939578839Z" level=info msg="container event discarded" container=bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9 type=CONTAINER_CREATED_EVENT Jan 20 02:26:00.999186 containerd[1640]: time="2026-01-20T02:26:00.993199431Z" level=info msg="container event discarded" container=32d6d9bcb75d8620a69a4d881af90f5c12579722521c645ce54e4e0f27cd4942 
type=CONTAINER_CREATED_EVENT Jan 20 02:26:01.117714 containerd[1640]: time="2026-01-20T02:26:01.114982209Z" level=error msg="Failed to destroy network for sandbox \"ba85a5959b0bec57259b3b5e4131ecbcab23124042ec329710ba91b316e4f1af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:26:01.154403 systemd[1]: run-netns-cni\x2dd7d3c5d2\x2dd62b\x2da950\x2dbbc0\x2d25b9c5027abc.mount: Deactivated successfully. Jan 20 02:26:01.311129 containerd[1640]: time="2026-01-20T02:26:01.308212664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-r829f,Uid:ca9f2980-346b-4927-8985-9cb6081e02db,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba85a5959b0bec57259b3b5e4131ecbcab23124042ec329710ba91b316e4f1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:26:01.365946 kubelet[3053]: E0120 02:26:01.348006 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba85a5959b0bec57259b3b5e4131ecbcab23124042ec329710ba91b316e4f1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:26:01.365946 kubelet[3053]: E0120 02:26:01.348280 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba85a5959b0bec57259b3b5e4131ecbcab23124042ec329710ba91b316e4f1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" Jan 20 02:26:01.365946 kubelet[3053]: E0120 02:26:01.360693 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba85a5959b0bec57259b3b5e4131ecbcab23124042ec329710ba91b316e4f1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" Jan 20 02:26:01.369177 kubelet[3053]: E0120 02:26:01.360896 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba85a5959b0bec57259b3b5e4131ecbcab23124042ec329710ba91b316e4f1af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:26:01.386967 systemd[1]: Started cri-containerd-8e85802aeee374b1de9dc1374fe0b54bb6ed7fe985d678e67e4a5852dc637d70.scope - libcontainer container 8e85802aeee374b1de9dc1374fe0b54bb6ed7fe985d678e67e4a5852dc637d70. 
Jan 20 02:26:01.575577 containerd[1640]: time="2026-01-20T02:26:01.569402493Z" level=error msg="Failed to destroy network for sandbox \"05d33408276cfca56163d369e3e09d2030347093eb8257945990f06c633119f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:26:01.582028 systemd[1]: run-netns-cni\x2d56c209e1\x2de6f7\x2d9b95\x2d66e6\x2d7cb39908dc6f.mount: Deactivated successfully. Jan 20 02:26:01.628614 containerd[1640]: time="2026-01-20T02:26:01.621840911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd7f95659-84svg,Uid:4693d45f-0393-4dfc-8971-a75e80c17ac2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05d33408276cfca56163d369e3e09d2030347093eb8257945990f06c633119f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:26:01.629805 kubelet[3053]: E0120 02:26:01.629754 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05d33408276cfca56163d369e3e09d2030347093eb8257945990f06c633119f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:26:01.630061 kubelet[3053]: E0120 02:26:01.630028 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05d33408276cfca56163d369e3e09d2030347093eb8257945990f06c633119f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-bd7f95659-84svg" Jan 20 02:26:01.632379 kubelet[3053]: E0120 02:26:01.632295 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05d33408276cfca56163d369e3e09d2030347093eb8257945990f06c633119f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd7f95659-84svg" Jan 20 02:26:01.636149 kubelet[3053]: E0120 02:26:01.634772 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bd7f95659-84svg_calico-system(4693d45f-0393-4dfc-8971-a75e80c17ac2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bd7f95659-84svg_calico-system(4693d45f-0393-4dfc-8971-a75e80c17ac2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05d33408276cfca56163d369e3e09d2030347093eb8257945990f06c633119f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bd7f95659-84svg" podUID="4693d45f-0393-4dfc-8971-a75e80c17ac2" Jan 20 02:26:01.987933 containerd[1640]: time="2026-01-20T02:26:01.977883028Z" level=info msg="container event discarded" container=abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7 type=CONTAINER_STARTED_EVENT Jan 20 02:26:02.015632 containerd[1640]: time="2026-01-20T02:26:02.014302196Z" level=info msg="container event discarded" container=bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9 type=CONTAINER_STARTED_EVENT Jan 20 02:26:02.226254 containerd[1640]: time="2026-01-20T02:26:02.226171754Z" level=info msg="container event discarded" container=32d6d9bcb75d8620a69a4d881af90f5c12579722521c645ce54e4e0f27cd4942 
type=CONTAINER_STARTED_EVENT Jan 20 02:26:02.475000 audit: BPF prog-id=175 op=LOAD Jan 20 02:26:02.475000 audit[5148]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00022a488 a2=98 a3=0 items=0 ppid=3643 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:02.557387 kernel: audit: type=1334 audit(1768875962.475:589): prog-id=175 op=LOAD Jan 20 02:26:02.557668 kernel: audit: type=1300 audit(1768875962.475:589): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00022a488 a2=98 a3=0 items=0 ppid=3643 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:02.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383538303261656565333734623164653964633133373466653062 Jan 20 02:26:02.620908 kernel: audit: type=1327 audit(1768875962.475:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383538303261656565333734623164653964633133373466653062 Jan 20 02:26:02.621062 kernel: audit: type=1334 audit(1768875962.475:590): prog-id=176 op=LOAD Jan 20 02:26:02.475000 audit: BPF prog-id=176 op=LOAD Jan 20 02:26:02.627831 containerd[1640]: time="2026-01-20T02:26:02.622014525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-v64bv,Uid:4d193768-31ad-4962-ae34-80e85c7499df,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:26:02.475000 audit[5148]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 
a0=5 a1=c00022a218 a2=98 a3=0 items=0 ppid=3643 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:02.684133 containerd[1640]: time="2026-01-20T02:26:02.683636134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9lglv,Uid:797382c1-6a9f-48bd-be88-5e85feeef509,Namespace:calico-system,Attempt:0,}" Jan 20 02:26:02.727085 kernel: audit: type=1300 audit(1768875962.475:590): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00022a218 a2=98 a3=0 items=0 ppid=3643 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:02.727230 kernel: audit: type=1327 audit(1768875962.475:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383538303261656565333734623164653964633133373466653062 Jan 20 02:26:02.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383538303261656565333734623164653964633133373466653062 Jan 20 02:26:02.475000 audit: BPF prog-id=176 op=UNLOAD Jan 20 02:26:02.794717 kernel: audit: type=1334 audit(1768875962.475:591): prog-id=176 op=UNLOAD Jan 20 02:26:02.475000 audit[5148]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3643 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:02.870492 kernel: audit: type=1300 
audit(1768875962.475:591): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3643 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:02.870693 kernel: audit: type=1327 audit(1768875962.475:591): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383538303261656565333734623164653964633133373466653062 Jan 20 02:26:02.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383538303261656565333734623164653964633133373466653062 Jan 20 02:26:02.475000 audit: BPF prog-id=175 op=UNLOAD Jan 20 02:26:02.475000 audit[5148]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3643 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:02.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383538303261656565333734623164653964633133373466653062 Jan 20 02:26:02.475000 audit: BPF prog-id=177 op=LOAD Jan 20 02:26:02.475000 audit[5148]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00022a6e8 a2=98 a3=0 items=0 ppid=3643 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
20 02:26:02.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865383538303261656565333734623164653964633133373466653062 Jan 20 02:26:02.944173 kernel: audit: type=1334 audit(1768875962.475:592): prog-id=175 op=UNLOAD Jan 20 02:26:03.025501 containerd[1640]: time="2026-01-20T02:26:03.021594182Z" level=info msg="StartContainer for \"8e85802aeee374b1de9dc1374fe0b54bb6ed7fe985d678e67e4a5852dc637d70\" returns successfully" Jan 20 02:26:03.480591 containerd[1640]: time="2026-01-20T02:26:03.480257075Z" level=error msg="Failed to destroy network for sandbox \"af4d54256cfb1155c566f5bf7a18f3cf3eb45119979d75085355298d67ab5c11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:26:03.497212 systemd[1]: run-netns-cni\x2d0ba8b4b8\x2dd08a\x2d0d53\x2d15cc\x2dbb48f330e83c.mount: Deactivated successfully. 
Jan 20 02:26:03.539683 kubelet[3053]: E0120 02:26:03.533749 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:03.577479 containerd[1640]: time="2026-01-20T02:26:03.576501981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9lglv,Uid:797382c1-6a9f-48bd-be88-5e85feeef509,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4d54256cfb1155c566f5bf7a18f3cf3eb45119979d75085355298d67ab5c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:26:03.579795 kubelet[3053]: E0120 02:26:03.579681 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4d54256cfb1155c566f5bf7a18f3cf3eb45119979d75085355298d67ab5c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:26:03.579795 kubelet[3053]: E0120 02:26:03.579775 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4d54256cfb1155c566f5bf7a18f3cf3eb45119979d75085355298d67ab5c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9lglv" Jan 20 02:26:03.579943 kubelet[3053]: E0120 02:26:03.579802 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"af4d54256cfb1155c566f5bf7a18f3cf3eb45119979d75085355298d67ab5c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9lglv" Jan 20 02:26:03.579943 kubelet[3053]: E0120 02:26:03.579866 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af4d54256cfb1155c566f5bf7a18f3cf3eb45119979d75085355298d67ab5c11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:26:03.721271 containerd[1640]: time="2026-01-20T02:26:03.719030481Z" level=error msg="Failed to destroy network for sandbox \"d488c781114c66f5f2883ccba209f9e493ee4c08990904887f84da7639aa231e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:26:03.723482 systemd[1]: run-netns-cni\x2deb548c34\x2dd3e8\x2df623\x2d2388\x2d7c54ede0247b.mount: Deactivated successfully. 
Jan 20 02:26:03.834394 containerd[1640]: time="2026-01-20T02:26:03.796200743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-v64bv,Uid:4d193768-31ad-4962-ae34-80e85c7499df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d488c781114c66f5f2883ccba209f9e493ee4c08990904887f84da7639aa231e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:26:03.835227 kubelet[3053]: E0120 02:26:03.825967 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d488c781114c66f5f2883ccba209f9e493ee4c08990904887f84da7639aa231e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 02:26:03.835227 kubelet[3053]: E0120 02:26:03.829676 3053 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d488c781114c66f5f2883ccba209f9e493ee4c08990904887f84da7639aa231e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" Jan 20 02:26:03.835227 kubelet[3053]: E0120 02:26:03.829726 3053 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d488c781114c66f5f2883ccba209f9e493ee4c08990904887f84da7639aa231e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" Jan 20 02:26:03.894778 kubelet[3053]: E0120 02:26:03.829882 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d488c781114c66f5f2883ccba209f9e493ee4c08990904887f84da7639aa231e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:26:03.965847 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 02:26:03.988766 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 20 02:26:04.026641 kubelet[3053]: E0120 02:26:04.025087 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:05.075018 kubelet[3053]: E0120 02:26:05.074727 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:05.834588 kubelet[3053]: I0120 02:26:05.834124 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cwnc4" podStartSLOduration=9.44599653 podStartE2EDuration="2m26.834100484s" podCreationTimestamp="2026-01-20 02:23:39 +0000 UTC" firstStartedPulling="2026-01-20 02:23:41.765606919 +0000 UTC m=+112.664249118" lastFinishedPulling="2026-01-20 02:25:59.153710874 +0000 UTC m=+250.052353072" observedRunningTime="2026-01-20 02:26:04.17690682 +0000 UTC m=+255.075549029" watchObservedRunningTime="2026-01-20 02:26:05.834100484 +0000 UTC m=+256.732742683" Jan 20 02:26:06.025048 kubelet[3053]: I0120 02:26:06.024609 3053 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4693d45f-0393-4dfc-8971-a75e80c17ac2-whisker-ca-bundle\") pod \"4693d45f-0393-4dfc-8971-a75e80c17ac2\" (UID: \"4693d45f-0393-4dfc-8971-a75e80c17ac2\") " Jan 20 02:26:06.086888 kubelet[3053]: I0120 02:26:06.033159 3053 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4693d45f-0393-4dfc-8971-a75e80c17ac2-whisker-backend-key-pair\") pod \"4693d45f-0393-4dfc-8971-a75e80c17ac2\" (UID: \"4693d45f-0393-4dfc-8971-a75e80c17ac2\") " Jan 20 02:26:06.086888 kubelet[3053]: I0120 02:26:06.033234 3053 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg6km\" (UniqueName: 
\"kubernetes.io/projected/4693d45f-0393-4dfc-8971-a75e80c17ac2-kube-api-access-mg6km\") pod \"4693d45f-0393-4dfc-8971-a75e80c17ac2\" (UID: \"4693d45f-0393-4dfc-8971-a75e80c17ac2\") " Jan 20 02:26:06.086888 kubelet[3053]: I0120 02:26:06.054043 3053 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4693d45f-0393-4dfc-8971-a75e80c17ac2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4693d45f-0393-4dfc-8971-a75e80c17ac2" (UID: "4693d45f-0393-4dfc-8971-a75e80c17ac2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 02:26:06.122990 kubelet[3053]: I0120 02:26:06.099197 3053 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4693d45f-0393-4dfc-8971-a75e80c17ac2-kube-api-access-mg6km" (OuterVolumeSpecName: "kube-api-access-mg6km") pod "4693d45f-0393-4dfc-8971-a75e80c17ac2" (UID: "4693d45f-0393-4dfc-8971-a75e80c17ac2"). InnerVolumeSpecName "kube-api-access-mg6km". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 02:26:06.122990 kubelet[3053]: I0120 02:26:06.110642 3053 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 02:26:06.122990 kubelet[3053]: E0120 02:26:06.118137 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:06.113198 systemd[1]: var-lib-kubelet-pods-4693d45f\x2d0393\x2d4dfc\x2d8971\x2da75e80c17ac2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmg6km.mount: Deactivated successfully. 
Jan 20 02:26:06.210290 kubelet[3053]: I0120 02:26:06.139001 3053 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mg6km\" (UniqueName: \"kubernetes.io/projected/4693d45f-0393-4dfc-8971-a75e80c17ac2-kube-api-access-mg6km\") on node \"localhost\" DevicePath \"\"" Jan 20 02:26:06.210290 kubelet[3053]: I0120 02:26:06.139068 3053 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4693d45f-0393-4dfc-8971-a75e80c17ac2-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 20 02:26:06.351156 systemd[1]: var-lib-kubelet-pods-4693d45f\x2d0393\x2d4dfc\x2d8971\x2da75e80c17ac2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 02:26:06.377990 kubelet[3053]: I0120 02:26:06.372872 3053 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4693d45f-0393-4dfc-8971-a75e80c17ac2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4693d45f-0393-4dfc-8971-a75e80c17ac2" (UID: "4693d45f-0393-4dfc-8971-a75e80c17ac2"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 02:26:06.470023 kubelet[3053]: I0120 02:26:06.462029 3053 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4693d45f-0393-4dfc-8971-a75e80c17ac2-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 20 02:26:06.711050 systemd[1]: Removed slice kubepods-besteffort-pod4693d45f_0393_4dfc_8971_a75e80c17ac2.slice - libcontainer container kubepods-besteffort-pod4693d45f_0393_4dfc_8971_a75e80c17ac2.slice. 
Jan 20 02:26:07.519224 kubelet[3053]: I0120 02:26:07.519172 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75bc6f23-38ce-4e96-aaf1-83d653850866-whisker-ca-bundle\") pod \"whisker-749b857967-xt4pg\" (UID: \"75bc6f23-38ce-4e96-aaf1-83d653850866\") " pod="calico-system/whisker-749b857967-xt4pg" Jan 20 02:26:07.527427 kubelet[3053]: I0120 02:26:07.527208 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/75bc6f23-38ce-4e96-aaf1-83d653850866-whisker-backend-key-pair\") pod \"whisker-749b857967-xt4pg\" (UID: \"75bc6f23-38ce-4e96-aaf1-83d653850866\") " pod="calico-system/whisker-749b857967-xt4pg" Jan 20 02:26:07.527427 kubelet[3053]: I0120 02:26:07.527289 3053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fttjw\" (UniqueName: \"kubernetes.io/projected/75bc6f23-38ce-4e96-aaf1-83d653850866-kube-api-access-fttjw\") pod \"whisker-749b857967-xt4pg\" (UID: \"75bc6f23-38ce-4e96-aaf1-83d653850866\") " pod="calico-system/whisker-749b857967-xt4pg" Jan 20 02:26:07.540059 systemd[1]: Created slice kubepods-besteffort-pod75bc6f23_38ce_4e96_aaf1_83d653850866.slice - libcontainer container kubepods-besteffort-pod75bc6f23_38ce_4e96_aaf1_83d653850866.slice. 
Jan 20 02:26:08.475641 containerd[1640]: time="2026-01-20T02:26:08.468113608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-749b857967-xt4pg,Uid:75bc6f23-38ce-4e96-aaf1-83d653850866,Namespace:calico-system,Attempt:0,}" Jan 20 02:26:08.666594 kubelet[3053]: I0120 02:26:08.665788 3053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4693d45f-0393-4dfc-8971-a75e80c17ac2" path="/var/lib/kubelet/pods/4693d45f-0393-4dfc-8971-a75e80c17ac2/volumes" Jan 20 02:26:09.562596 kubelet[3053]: E0120 02:26:09.560192 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:10.630299 containerd[1640]: time="2026-01-20T02:26:10.622589732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tfwc7,Uid:4892884d-a213-4dd6-ab53-844c331ae6d1,Namespace:calico-system,Attempt:0,}" Jan 20 02:26:10.701556 kubelet[3053]: E0120 02:26:10.698645 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:10.702067 containerd[1640]: time="2026-01-20T02:26:10.700113663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m48wx,Uid:7d7d1be6-3f16-4e7b-a636-54909266045f,Namespace:kube-system,Attempt:0,}" Jan 20 02:26:11.552700 kubelet[3053]: E0120 02:26:11.552651 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:11.574588 containerd[1640]: time="2026-01-20T02:26:11.574305201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4cpfh,Uid:bc4468de-9eba-48b5-88c5-c38dc3f08d39,Namespace:kube-system,Attempt:0,}" Jan 20 02:26:11.774680 systemd-networkd[1538]: cali754efec106c: Link UP Jan 20 
02:26:11.826862 systemd-networkd[1538]: cali754efec106c: Gained carrier Jan 20 02:26:12.154198 containerd[1640]: 2026-01-20 02:26:08.963 [INFO][5345] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 02:26:12.154198 containerd[1640]: 2026-01-20 02:26:09.471 [INFO][5345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--749b857967--xt4pg-eth0 whisker-749b857967- calico-system 75bc6f23-38ce-4e96-aaf1-83d653850866 1427 0 2026-01-20 02:26:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:749b857967 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-749b857967-xt4pg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali754efec106c [] [] }} ContainerID="70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" Namespace="calico-system" Pod="whisker-749b857967-xt4pg" WorkloadEndpoint="localhost-k8s-whisker--749b857967--xt4pg-" Jan 20 02:26:12.154198 containerd[1640]: 2026-01-20 02:26:09.471 [INFO][5345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" Namespace="calico-system" Pod="whisker-749b857967-xt4pg" WorkloadEndpoint="localhost-k8s-whisker--749b857967--xt4pg-eth0" Jan 20 02:26:12.154198 containerd[1640]: 2026-01-20 02:26:10.946 [INFO][5365] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" HandleID="k8s-pod-network.70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" Workload="localhost-k8s-whisker--749b857967--xt4pg-eth0" Jan 20 02:26:12.208419 containerd[1640]: 2026-01-20 02:26:10.948 [INFO][5365] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" HandleID="k8s-pod-network.70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" Workload="localhost-k8s-whisker--749b857967--xt4pg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001925f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-749b857967-xt4pg", "timestamp":"2026-01-20 02:26:10.946267456 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:26:12.208419 containerd[1640]: 2026-01-20 02:26:10.948 [INFO][5365] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 02:26:12.208419 containerd[1640]: 2026-01-20 02:26:10.948 [INFO][5365] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 02:26:12.208419 containerd[1640]: 2026-01-20 02:26:10.949 [INFO][5365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:26:12.208419 containerd[1640]: 2026-01-20 02:26:11.041 [INFO][5365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" host="localhost" Jan 20 02:26:12.208419 containerd[1640]: 2026-01-20 02:26:11.125 [INFO][5365] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:26:12.208419 containerd[1640]: 2026-01-20 02:26:11.217 [INFO][5365] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:26:12.208419 containerd[1640]: 2026-01-20 02:26:11.228 [INFO][5365] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:26:12.208419 containerd[1640]: 2026-01-20 02:26:11.248 [INFO][5365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jan 20 02:26:12.208419 containerd[1640]: 2026-01-20 02:26:11.248 [INFO][5365] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" host="localhost" Jan 20 02:26:12.229213 containerd[1640]: 2026-01-20 02:26:11.275 [INFO][5365] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34 Jan 20 02:26:12.229213 containerd[1640]: 2026-01-20 02:26:11.346 [INFO][5365] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" host="localhost" Jan 20 02:26:12.229213 containerd[1640]: 2026-01-20 02:26:11.421 [INFO][5365] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" host="localhost" Jan 20 02:26:12.229213 containerd[1640]: 2026-01-20 02:26:11.421 [INFO][5365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" host="localhost" Jan 20 02:26:12.229213 containerd[1640]: 2026-01-20 02:26:11.421 [INFO][5365] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 02:26:12.229213 containerd[1640]: 2026-01-20 02:26:11.421 [INFO][5365] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" HandleID="k8s-pod-network.70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" Workload="localhost-k8s-whisker--749b857967--xt4pg-eth0" Jan 20 02:26:12.229421 containerd[1640]: 2026-01-20 02:26:11.453 [INFO][5345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" Namespace="calico-system" Pod="whisker-749b857967-xt4pg" WorkloadEndpoint="localhost-k8s-whisker--749b857967--xt4pg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--749b857967--xt4pg-eth0", GenerateName:"whisker-749b857967-", Namespace:"calico-system", SelfLink:"", UID:"75bc6f23-38ce-4e96-aaf1-83d653850866", ResourceVersion:"1427", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 26, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"749b857967", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-749b857967-xt4pg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali754efec106c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:12.229421 containerd[1640]: 2026-01-20 02:26:11.454 [INFO][5345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" Namespace="calico-system" Pod="whisker-749b857967-xt4pg" WorkloadEndpoint="localhost-k8s-whisker--749b857967--xt4pg-eth0" Jan 20 02:26:12.229679 containerd[1640]: 2026-01-20 02:26:11.454 [INFO][5345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali754efec106c ContainerID="70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" Namespace="calico-system" Pod="whisker-749b857967-xt4pg" WorkloadEndpoint="localhost-k8s-whisker--749b857967--xt4pg-eth0" Jan 20 02:26:12.229679 containerd[1640]: 2026-01-20 02:26:11.876 [INFO][5345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" Namespace="calico-system" Pod="whisker-749b857967-xt4pg" WorkloadEndpoint="localhost-k8s-whisker--749b857967--xt4pg-eth0" Jan 20 02:26:12.229756 containerd[1640]: 2026-01-20 02:26:11.888 [INFO][5345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" Namespace="calico-system" Pod="whisker-749b857967-xt4pg" WorkloadEndpoint="localhost-k8s-whisker--749b857967--xt4pg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--749b857967--xt4pg-eth0", GenerateName:"whisker-749b857967-", Namespace:"calico-system", SelfLink:"", UID:"75bc6f23-38ce-4e96-aaf1-83d653850866", ResourceVersion:"1427", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 26, 7, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"749b857967", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34", Pod:"whisker-749b857967-xt4pg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali754efec106c", MAC:"3a:d9:7d:1d:d7:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:12.229860 containerd[1640]: 2026-01-20 02:26:12.060 [INFO][5345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" Namespace="calico-system" Pod="whisker-749b857967-xt4pg" WorkloadEndpoint="localhost-k8s-whisker--749b857967--xt4pg-eth0" Jan 20 02:26:12.648815 containerd[1640]: time="2026-01-20T02:26:12.625298470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-r829f,Uid:ca9f2980-346b-4927-8985-9cb6081e02db,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:26:12.735123 containerd[1640]: time="2026-01-20T02:26:12.720709067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746557d8fc-ztfh7,Uid:e572f9c2-ce5a-4d3c-956a-a140a15040fb,Namespace:calico-system,Attempt:0,}" Jan 20 02:26:12.927227 systemd-networkd[1538]: cali2ec49cb7b3b: Link UP Jan 20 02:26:12.931817 systemd-networkd[1538]: 
cali2ec49cb7b3b: Gained carrier Jan 20 02:26:13.169578 containerd[1640]: 2026-01-20 02:26:11.113 [INFO][5395] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 02:26:13.169578 containerd[1640]: 2026-01-20 02:26:11.255 [INFO][5395] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--m48wx-eth0 coredns-66bc5c9577- kube-system 7d7d1be6-3f16-4e7b-a636-54909266045f 1208 0 2026-01-20 02:21:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-m48wx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ec49cb7b3b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" Namespace="kube-system" Pod="coredns-66bc5c9577-m48wx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--m48wx-" Jan 20 02:26:13.169578 containerd[1640]: 2026-01-20 02:26:11.261 [INFO][5395] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" Namespace="kube-system" Pod="coredns-66bc5c9577-m48wx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--m48wx-eth0" Jan 20 02:26:13.169578 containerd[1640]: 2026-01-20 02:26:11.587 [INFO][5423] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" HandleID="k8s-pod-network.e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" Workload="localhost-k8s-coredns--66bc5c9577--m48wx-eth0" Jan 20 02:26:13.170619 containerd[1640]: 2026-01-20 02:26:11.588 [INFO][5423] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" HandleID="k8s-pod-network.e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" Workload="localhost-k8s-coredns--66bc5c9577--m48wx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c6180), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-m48wx", "timestamp":"2026-01-20 02:26:11.587034095 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:26:13.170619 containerd[1640]: 2026-01-20 02:26:11.625 [INFO][5423] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 02:26:13.170619 containerd[1640]: 2026-01-20 02:26:11.634 [INFO][5423] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 02:26:13.170619 containerd[1640]: 2026-01-20 02:26:11.634 [INFO][5423] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:26:13.170619 containerd[1640]: 2026-01-20 02:26:11.756 [INFO][5423] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" host="localhost" Jan 20 02:26:13.170619 containerd[1640]: 2026-01-20 02:26:12.012 [INFO][5423] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:26:13.170619 containerd[1640]: 2026-01-20 02:26:12.185 [INFO][5423] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:26:13.170619 containerd[1640]: 2026-01-20 02:26:12.292 [INFO][5423] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:26:13.170619 containerd[1640]: 2026-01-20 02:26:12.400 [INFO][5423] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jan 20 02:26:13.170619 containerd[1640]: 2026-01-20 02:26:12.400 [INFO][5423] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" host="localhost" Jan 20 02:26:13.173587 containerd[1640]: 2026-01-20 02:26:12.438 [INFO][5423] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191 Jan 20 02:26:13.173587 containerd[1640]: 2026-01-20 02:26:12.493 [INFO][5423] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" host="localhost" Jan 20 02:26:13.173587 containerd[1640]: 2026-01-20 02:26:12.586 [INFO][5423] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" host="localhost" Jan 20 02:26:13.173587 containerd[1640]: 2026-01-20 02:26:12.588 [INFO][5423] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" host="localhost" Jan 20 02:26:13.173587 containerd[1640]: 2026-01-20 02:26:12.588 [INFO][5423] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 02:26:13.173587 containerd[1640]: 2026-01-20 02:26:12.588 [INFO][5423] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" HandleID="k8s-pod-network.e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" Workload="localhost-k8s-coredns--66bc5c9577--m48wx-eth0" Jan 20 02:26:13.175186 containerd[1640]: 2026-01-20 02:26:12.798 [INFO][5395] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" Namespace="kube-system" Pod="coredns-66bc5c9577-m48wx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--m48wx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--m48wx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7d7d1be6-3f16-4e7b-a636-54909266045f", ResourceVersion:"1208", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-m48wx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ec49cb7b3b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:13.175186 containerd[1640]: 2026-01-20 02:26:12.798 [INFO][5395] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" Namespace="kube-system" Pod="coredns-66bc5c9577-m48wx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--m48wx-eth0" Jan 20 02:26:13.175186 containerd[1640]: 2026-01-20 02:26:12.799 [INFO][5395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ec49cb7b3b ContainerID="e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" Namespace="kube-system" Pod="coredns-66bc5c9577-m48wx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--m48wx-eth0" Jan 20 02:26:13.175186 containerd[1640]: 2026-01-20 02:26:12.928 [INFO][5395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" Namespace="kube-system" Pod="coredns-66bc5c9577-m48wx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--m48wx-eth0" Jan 20 02:26:13.175186 containerd[1640]: 2026-01-20 02:26:12.928 [INFO][5395] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" Namespace="kube-system" Pod="coredns-66bc5c9577-m48wx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--m48wx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--m48wx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7d7d1be6-3f16-4e7b-a636-54909266045f", ResourceVersion:"1208", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191", Pod:"coredns-66bc5c9577-m48wx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ec49cb7b3b", MAC:"1a:4d:20:8f:e3:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:13.175186 containerd[1640]: 2026-01-20 02:26:13.020 [INFO][5395] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" Namespace="kube-system" Pod="coredns-66bc5c9577-m48wx" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--m48wx-eth0" Jan 20 02:26:13.609636 systemd-networkd[1538]: cali754efec106c: Gained IPv6LL Jan 20 02:26:13.886081 systemd-networkd[1538]: cali5dcdd25e88e: Link UP Jan 20 02:26:13.886820 systemd-networkd[1538]: cali5dcdd25e88e: Gained carrier Jan 20 02:26:14.458812 containerd[1640]: time="2026-01-20T02:26:14.458738459Z" level=info msg="connecting to shim e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191" address="unix:///run/containerd/s/75d17eccaf35dbf9ceaa4f9429e9c4d9d70914ac47af1548d3a1e6afaf16573b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:10.971 [INFO][5382] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:11.134 [INFO][5382] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--tfwc7-eth0 goldmane-7c778bb748- calico-system 4892884d-a213-4dd6-ab53-844c331ae6d1 1212 0 2026-01-20 02:23:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} 
{k8s localhost goldmane-7c778bb748-tfwc7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5dcdd25e88e [] [] }} ContainerID="a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" Namespace="calico-system" Pod="goldmane-7c778bb748-tfwc7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tfwc7-" Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:11.134 [INFO][5382] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" Namespace="calico-system" Pod="goldmane-7c778bb748-tfwc7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tfwc7-eth0" Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:11.721 [INFO][5417] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" HandleID="k8s-pod-network.a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" Workload="localhost-k8s-goldmane--7c778bb748--tfwc7-eth0" Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:11.721 [INFO][5417] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" HandleID="k8s-pod-network.a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" Workload="localhost-k8s-goldmane--7c778bb748--tfwc7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000302810), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-tfwc7", "timestamp":"2026-01-20 02:26:11.721616369 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:11.721 [INFO][5417] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM 
lock. Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:12.620 [INFO][5417] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:12.621 [INFO][5417] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:12.803 [INFO][5417] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" host="localhost" Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:13.015 [INFO][5417] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:13.198 [INFO][5417] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:13.315 [INFO][5417] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:13.502 [INFO][5417] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:13.502 [INFO][5417] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" host="localhost" Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:13.590 [INFO][5417] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672 Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:13.709 [INFO][5417] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" host="localhost" Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:13.795 [INFO][5417] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" host="localhost" Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:13.795 [INFO][5417] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" host="localhost" Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:13.795 [INFO][5417] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 02:26:14.640351 containerd[1640]: 2026-01-20 02:26:13.795 [INFO][5417] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" HandleID="k8s-pod-network.a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" Workload="localhost-k8s-goldmane--7c778bb748--tfwc7-eth0" Jan 20 02:26:14.816264 containerd[1640]: 2026-01-20 02:26:13.828 [INFO][5382] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" Namespace="calico-system" Pod="goldmane-7c778bb748-tfwc7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tfwc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--tfwc7-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4892884d-a213-4dd6-ab53-844c331ae6d1", ResourceVersion:"1212", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-tfwc7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5dcdd25e88e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:14.816264 containerd[1640]: 2026-01-20 02:26:13.833 [INFO][5382] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" Namespace="calico-system" Pod="goldmane-7c778bb748-tfwc7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tfwc7-eth0" Jan 20 02:26:14.816264 containerd[1640]: 2026-01-20 02:26:13.834 [INFO][5382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5dcdd25e88e ContainerID="a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" Namespace="calico-system" Pod="goldmane-7c778bb748-tfwc7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tfwc7-eth0" Jan 20 02:26:14.816264 containerd[1640]: 2026-01-20 02:26:13.884 [INFO][5382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" Namespace="calico-system" Pod="goldmane-7c778bb748-tfwc7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tfwc7-eth0" Jan 20 02:26:14.816264 containerd[1640]: 2026-01-20 02:26:13.904 [INFO][5382] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" Namespace="calico-system" Pod="goldmane-7c778bb748-tfwc7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tfwc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--tfwc7-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4892884d-a213-4dd6-ab53-844c331ae6d1", ResourceVersion:"1212", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672", Pod:"goldmane-7c778bb748-tfwc7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5dcdd25e88e", MAC:"ca:78:c3:23:47:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:14.816264 containerd[1640]: 2026-01-20 02:26:14.227 [INFO][5382] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" Namespace="calico-system" Pod="goldmane-7c778bb748-tfwc7" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--tfwc7-eth0" Jan 20 02:26:14.816264 containerd[1640]: time="2026-01-20T02:26:14.807160137Z" level=info msg="connecting to shim 70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34" address="unix:///run/containerd/s/bbec2a6537edfe43699bedcc745933b61ac322d41889ec942def61ecc123875c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:26:14.736748 systemd-networkd[1538]: cali2ec49cb7b3b: Gained IPv6LL Jan 20 02:26:16.005801 systemd-networkd[1538]: cali5dcdd25e88e: Gained IPv6LL Jan 20 02:26:16.320455 containerd[1640]: time="2026-01-20T02:26:16.302734749Z" level=info msg="connecting to shim a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672" address="unix:///run/containerd/s/d14f85172b99a5da7f5bee932a50a2527aca58ec1ecf31619fd5c43ef0493053" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:26:16.825432 containerd[1640]: time="2026-01-20T02:26:16.823023625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9lglv,Uid:797382c1-6a9f-48bd-be88-5e85feeef509,Namespace:calico-system,Attempt:0,}" Jan 20 02:26:16.824638 systemd[1]: Started cri-containerd-e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191.scope - libcontainer container e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191. 
Jan 20 02:26:17.323913 systemd-networkd[1538]: cali86f30cbaa3a: Link UP Jan 20 02:26:17.486220 systemd-networkd[1538]: cali86f30cbaa3a: Gained carrier Jan 20 02:26:17.503582 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 20 02:26:17.503771 kernel: audit: type=1334 audit(1768875977.482:594): prog-id=178 op=LOAD Jan 20 02:26:17.482000 audit: BPF prog-id=178 op=LOAD Jan 20 02:26:17.521000 audit: BPF prog-id=179 op=LOAD Jan 20 02:26:17.565002 kernel: audit: type=1334 audit(1768875977.521:595): prog-id=179 op=LOAD Jan 20 02:26:17.624912 kernel: audit: type=1300 audit(1768875977.521:595): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5619 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:17.521000 audit[5663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5619 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:17.766680 kernel: audit: type=1327 audit(1768875977.521:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537383065663061343732663966613261663064633331623536383430 Jan 20 02:26:17.766777 kernel: audit: type=1334 audit(1768875977.563:596): prog-id=179 op=UNLOAD Jan 20 02:26:17.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537383065663061343732663966613261663064633331623536383430 Jan 20 02:26:17.563000 audit: BPF prog-id=179 
op=UNLOAD Jan 20 02:26:17.660869 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:26:17.767802 containerd[1640]: time="2026-01-20T02:26:17.651353962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-v64bv,Uid:4d193768-31ad-4962-ae34-80e85c7499df,Namespace:calico-apiserver,Attempt:0,}" Jan 20 02:26:17.563000 audit[5663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5619 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:12.365 [INFO][5432] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:12.475 [INFO][5432] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--4cpfh-eth0 coredns-66bc5c9577- kube-system bc4468de-9eba-48b5-88c5-c38dc3f08d39 1219 0 2026-01-20 02:21:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-4cpfh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86f30cbaa3a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" Namespace="kube-system" Pod="coredns-66bc5c9577-4cpfh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4cpfh-" Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:12.477 [INFO][5432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" Namespace="kube-system" Pod="coredns-66bc5c9577-4cpfh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4cpfh-eth0" Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:13.676 [INFO][5548] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" HandleID="k8s-pod-network.0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" Workload="localhost-k8s-coredns--66bc5c9577--4cpfh-eth0" Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:13.677 [INFO][5548] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" HandleID="k8s-pod-network.0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" Workload="localhost-k8s-coredns--66bc5c9577--4cpfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fae0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-4cpfh", "timestamp":"2026-01-20 02:26:13.676950221 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:13.677 [INFO][5548] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:13.796 [INFO][5548] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:13.796 [INFO][5548] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:14.346 [INFO][5548] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" host="localhost" Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:14.508 [INFO][5548] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:15.719 [INFO][5548] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:15.817 [INFO][5548] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:16.119 [INFO][5548] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:16.120 [INFO][5548] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" host="localhost" Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:16.189 [INFO][5548] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:16.353 [INFO][5548] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" host="localhost" Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:16.902 [INFO][5548] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" host="localhost" Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:16.927 [INFO][5548] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" host="localhost" Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:16.927 [INFO][5548] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 02:26:17.789612 containerd[1640]: 2026-01-20 02:26:16.927 [INFO][5548] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" HandleID="k8s-pod-network.0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" Workload="localhost-k8s-coredns--66bc5c9577--4cpfh-eth0" Jan 20 02:26:17.790437 containerd[1640]: 2026-01-20 02:26:16.968 [INFO][5432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" Namespace="kube-system" Pod="coredns-66bc5c9577-4cpfh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4cpfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4cpfh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bc4468de-9eba-48b5-88c5-c38dc3f08d39", ResourceVersion:"1219", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-4cpfh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86f30cbaa3a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:17.790437 containerd[1640]: 2026-01-20 02:26:16.968 [INFO][5432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" Namespace="kube-system" Pod="coredns-66bc5c9577-4cpfh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4cpfh-eth0" Jan 20 02:26:17.790437 containerd[1640]: 2026-01-20 02:26:16.969 [INFO][5432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86f30cbaa3a ContainerID="0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" Namespace="kube-system" Pod="coredns-66bc5c9577-4cpfh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4cpfh-eth0" Jan 20 
02:26:17.790437 containerd[1640]: 2026-01-20 02:26:17.491 [INFO][5432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" Namespace="kube-system" Pod="coredns-66bc5c9577-4cpfh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4cpfh-eth0" Jan 20 02:26:17.790437 containerd[1640]: 2026-01-20 02:26:17.514 [INFO][5432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" Namespace="kube-system" Pod="coredns-66bc5c9577-4cpfh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4cpfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--4cpfh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bc4468de-9eba-48b5-88c5-c38dc3f08d39", ResourceVersion:"1219", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 21, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f", Pod:"coredns-66bc5c9577-4cpfh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86f30cbaa3a", 
MAC:"1a:c8:18:70:83:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:17.790437 containerd[1640]: 2026-01-20 02:26:17.761 [INFO][5432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" Namespace="kube-system" Pod="coredns-66bc5c9577-4cpfh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--4cpfh-eth0" Jan 20 02:26:17.834392 kernel: audit: type=1300 audit(1768875977.563:596): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5619 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:17.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537383065663061343732663966613261663064633331623536383430 Jan 20 02:26:17.957695 kernel: audit: type=1327 audit(1768875977.563:596): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537383065663061343732663966613261663064633331623536383430 Jan 20 02:26:17.902960 systemd[1]: Started cri-containerd-70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34.scope - libcontainer container 70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34. Jan 20 02:26:18.085913 kernel: audit: type=1334 audit(1768875977.563:597): prog-id=180 op=LOAD Jan 20 02:26:18.086122 kernel: audit: type=1300 audit(1768875977.563:597): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5619 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:17.563000 audit: BPF prog-id=180 op=LOAD Jan 20 02:26:17.563000 audit[5663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5619 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:17.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537383065663061343732663966613261663064633331623536383430 Jan 20 02:26:18.105865 systemd[1]: Started cri-containerd-a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672.scope - libcontainer container a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672. 
Jan 20 02:26:18.222812 kernel: audit: type=1327 audit(1768875977.563:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537383065663061343732663966613261663064633331623536383430 Jan 20 02:26:17.566000 audit: BPF prog-id=181 op=LOAD Jan 20 02:26:17.566000 audit[5663]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5619 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:17.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537383065663061343732663966613261663064633331623536383430 Jan 20 02:26:17.566000 audit: BPF prog-id=181 op=UNLOAD Jan 20 02:26:17.566000 audit[5663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5619 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:17.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537383065663061343732663966613261663064633331623536383430 Jan 20 02:26:17.566000 audit: BPF prog-id=180 op=UNLOAD Jan 20 02:26:17.566000 audit[5663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5619 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:17.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537383065663061343732663966613261663064633331623536383430 Jan 20 02:26:17.566000 audit: BPF prog-id=182 op=LOAD Jan 20 02:26:17.566000 audit[5663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5619 pid=5663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:17.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537383065663061343732663966613261663064633331623536383430 Jan 20 02:26:19.188000 audit: BPF prog-id=183 op=LOAD Jan 20 02:26:19.188000 audit[5800]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd61e62ac0 a2=98 a3=1fffffffffffffff items=0 ppid=5466 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.188000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 02:26:19.188000 audit: BPF prog-id=183 op=UNLOAD Jan 20 02:26:19.188000 audit[5800]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd61e62a90 a3=0 items=0 ppid=5466 pid=5800 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.188000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 02:26:19.188000 audit: BPF prog-id=184 op=LOAD Jan 20 02:26:19.188000 audit[5800]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd61e629a0 a2=94 a3=3 items=0 ppid=5466 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.188000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 02:26:19.195000 audit: BPF prog-id=184 op=UNLOAD Jan 20 02:26:19.195000 audit[5800]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd61e629a0 a2=94 a3=3 items=0 ppid=5466 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.195000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 02:26:19.195000 audit: BPF prog-id=185 op=LOAD Jan 20 02:26:19.195000 audit[5800]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd61e629e0 
a2=94 a3=7ffd61e62bc0 items=0 ppid=5466 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.195000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 02:26:19.195000 audit: BPF prog-id=185 op=UNLOAD Jan 20 02:26:19.195000 audit[5800]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd61e629e0 a2=94 a3=7ffd61e62bc0 items=0 ppid=5466 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.195000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 02:26:19.358357 systemd-networkd[1538]: cali86f30cbaa3a: Gained IPv6LL Jan 20 02:26:19.366000 audit: BPF prog-id=186 op=LOAD Jan 20 02:26:19.412000 audit: BPF prog-id=187 op=LOAD Jan 20 02:26:19.418000 audit: BPF prog-id=188 op=LOAD Jan 20 02:26:19.418000 audit[5667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000164238 a2=98 a3=0 items=0 ppid=5645 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.418000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730623132626330613530646266333662326535333539353737333366 Jan 20 02:26:19.418000 audit: BPF prog-id=188 op=UNLOAD Jan 20 02:26:19.418000 audit[5667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5645 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730623132626330613530646266333662326535333539353737333366 Jan 20 02:26:19.418000 audit: BPF prog-id=189 op=LOAD Jan 20 02:26:19.418000 audit[5667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000164488 a2=98 a3=0 items=0 ppid=5645 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730623132626330613530646266333662326535333539353737333366 Jan 20 02:26:19.418000 audit: BPF prog-id=190 op=LOAD Jan 20 02:26:19.418000 audit[5667]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000164218 a2=98 a3=0 items=0 ppid=5645 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 02:26:19.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730623132626330613530646266333662326535333539353737333366 Jan 20 02:26:19.418000 audit: BPF prog-id=190 op=UNLOAD Jan 20 02:26:19.418000 audit[5667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5645 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730623132626330613530646266333662326535333539353737333366 Jan 20 02:26:19.418000 audit: BPF prog-id=189 op=UNLOAD Jan 20 02:26:19.418000 audit[5667]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5645 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730623132626330613530646266333662326535333539353737333366 Jan 20 02:26:19.418000 audit: BPF prog-id=191 op=LOAD Jan 20 02:26:19.418000 audit[5667]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001646e8 a2=98 a3=0 items=0 ppid=5645 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730623132626330613530646266333662326535333539353737333366 Jan 20 02:26:19.584000 audit: BPF prog-id=192 op=LOAD Jan 20 02:26:19.584000 audit[5717]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5687 pid=5717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132306438363266373334363938343139346232303133343738316266 Jan 20 02:26:19.584000 audit: BPF prog-id=192 op=UNLOAD Jan 20 02:26:19.584000 audit[5717]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5687 pid=5717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132306438363266373334363938343139346232303133343738316266 Jan 20 02:26:19.584000 audit: BPF prog-id=193 op=LOAD Jan 20 02:26:19.584000 audit[5717]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5687 pid=5717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132306438363266373334363938343139346232303133343738316266 Jan 20 02:26:19.584000 audit: BPF prog-id=194 op=LOAD Jan 20 02:26:19.584000 audit[5717]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5687 pid=5717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132306438363266373334363938343139346232303133343738316266 Jan 20 02:26:19.584000 audit: BPF prog-id=194 op=UNLOAD Jan 20 02:26:19.584000 audit[5717]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5687 pid=5717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132306438363266373334363938343139346232303133343738316266 Jan 20 02:26:19.584000 audit: BPF prog-id=193 op=UNLOAD Jan 20 02:26:19.584000 audit[5717]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5687 pid=5717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132306438363266373334363938343139346232303133343738316266 Jan 20 02:26:19.584000 audit: BPF prog-id=195 op=LOAD Jan 20 02:26:19.584000 audit[5717]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5687 pid=5717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132306438363266373334363938343139346232303133343738316266 Jan 20 02:26:19.618000 audit: BPF prog-id=196 op=LOAD Jan 20 02:26:19.618000 audit[5815]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd791beab0 a2=98 a3=3 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.618000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:19.618000 audit: BPF prog-id=196 op=UNLOAD Jan 20 02:26:19.618000 audit[5815]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd791bea80 a3=0 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:26:19.618000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:19.677000 audit: BPF prog-id=197 op=LOAD Jan 20 02:26:19.677000 audit[5815]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd791be8a0 a2=94 a3=54428f items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.677000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:19.678000 audit: BPF prog-id=197 op=UNLOAD Jan 20 02:26:19.678000 audit[5815]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd791be8a0 a2=94 a3=54428f items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.678000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:19.678000 audit: BPF prog-id=198 op=LOAD Jan 20 02:26:19.678000 audit[5815]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd791be8d0 a2=94 a3=2 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.678000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:19.678000 audit: BPF prog-id=198 op=UNLOAD Jan 20 02:26:19.678000 audit[5815]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd791be8d0 a2=0 a3=2 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:19.678000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:19.710769 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:26:19.983277 containerd[1640]: time="2026-01-20T02:26:19.940401894Z" level=info msg="connecting to shim 0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f" address="unix:///run/containerd/s/f684acbb388b8f87b8122804fe29d415d2b77234d12a7000236465583b867987" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:26:20.124251 containerd[1640]: time="2026-01-20T02:26:20.124197341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m48wx,Uid:7d7d1be6-3f16-4e7b-a636-54909266045f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191\"" Jan 20 02:26:20.127265 kubelet[3053]: E0120 02:26:20.127176 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:20.509759 containerd[1640]: time="2026-01-20T02:26:20.508977256Z" level=info msg="CreateContainer within sandbox \"e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 02:26:20.580608 systemd-networkd[1538]: cali305838951a4: Link UP Jan 20 02:26:20.580946 systemd-networkd[1538]: cali305838951a4: Gained carrier Jan 20 02:26:21.121036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount609805260.mount: Deactivated successfully. 
Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:14.484 [INFO][5570] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:15.729 [INFO][5570] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--746557d8fc--ztfh7-eth0 calico-kube-controllers-746557d8fc- calico-system e572f9c2-ce5a-4d3c-956a-a140a15040fb 1223 0 2026-01-20 02:23:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:746557d8fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-746557d8fc-ztfh7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali305838951a4 [] [] }} ContainerID="20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" Namespace="calico-system" Pod="calico-kube-controllers-746557d8fc-ztfh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746557d8fc--ztfh7-" Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:15.729 [INFO][5570] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" Namespace="calico-system" Pod="calico-kube-controllers-746557d8fc-ztfh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746557d8fc--ztfh7-eth0" Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:17.791 [INFO][5676] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" HandleID="k8s-pod-network.20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" Workload="localhost-k8s-calico--kube--controllers--746557d8fc--ztfh7-eth0" Jan 20 02:26:21.173235 containerd[1640]: 
2026-01-20 02:26:17.791 [INFO][5676] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" HandleID="k8s-pod-network.20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" Workload="localhost-k8s-calico--kube--controllers--746557d8fc--ztfh7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003999a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-746557d8fc-ztfh7", "timestamp":"2026-01-20 02:26:17.791074163 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:17.791 [INFO][5676] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:17.792 [INFO][5676] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:17.792 [INFO][5676] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:18.093 [INFO][5676] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" host="localhost" Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:18.337 [INFO][5676] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:18.655 [INFO][5676] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:19.174 [INFO][5676] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:19.430 [INFO][5676] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:19.430 [INFO][5676] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" host="localhost" Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:19.630 [INFO][5676] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:19.960 [INFO][5676] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" host="localhost" Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:20.172 [INFO][5676] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" host="localhost" Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:20.172 [INFO][5676] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" host="localhost" Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:20.173 [INFO][5676] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 02:26:21.173235 containerd[1640]: 2026-01-20 02:26:20.173 [INFO][5676] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" HandleID="k8s-pod-network.20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" Workload="localhost-k8s-calico--kube--controllers--746557d8fc--ztfh7-eth0" Jan 20 02:26:21.219182 containerd[1640]: 2026-01-20 02:26:20.264 [INFO][5570] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" Namespace="calico-system" Pod="calico-kube-controllers-746557d8fc-ztfh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746557d8fc--ztfh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--746557d8fc--ztfh7-eth0", GenerateName:"calico-kube-controllers-746557d8fc-", Namespace:"calico-system", SelfLink:"", UID:"e572f9c2-ce5a-4d3c-956a-a140a15040fb", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"746557d8fc", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-746557d8fc-ztfh7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali305838951a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:21.219182 containerd[1640]: 2026-01-20 02:26:20.309 [INFO][5570] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" Namespace="calico-system" Pod="calico-kube-controllers-746557d8fc-ztfh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746557d8fc--ztfh7-eth0" Jan 20 02:26:21.219182 containerd[1640]: 2026-01-20 02:26:20.309 [INFO][5570] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali305838951a4 ContainerID="20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" Namespace="calico-system" Pod="calico-kube-controllers-746557d8fc-ztfh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746557d8fc--ztfh7-eth0" Jan 20 02:26:21.219182 containerd[1640]: 2026-01-20 02:26:20.538 [INFO][5570] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" Namespace="calico-system" Pod="calico-kube-controllers-746557d8fc-ztfh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746557d8fc--ztfh7-eth0" Jan 20 02:26:21.219182 containerd[1640]: 
2026-01-20 02:26:20.552 [INFO][5570] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" Namespace="calico-system" Pod="calico-kube-controllers-746557d8fc-ztfh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746557d8fc--ztfh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--746557d8fc--ztfh7-eth0", GenerateName:"calico-kube-controllers-746557d8fc-", Namespace:"calico-system", SelfLink:"", UID:"e572f9c2-ce5a-4d3c-956a-a140a15040fb", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"746557d8fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f", Pod:"calico-kube-controllers-746557d8fc-ztfh7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali305838951a4", MAC:"92:ae:61:1a:37:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:21.219182 containerd[1640]: 
2026-01-20 02:26:21.031 [INFO][5570] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" Namespace="calico-system" Pod="calico-kube-controllers-746557d8fc-ztfh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746557d8fc--ztfh7-eth0" Jan 20 02:26:21.265640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1664660382.mount: Deactivated successfully. Jan 20 02:26:21.335218 containerd[1640]: time="2026-01-20T02:26:21.317236063Z" level=info msg="Container 2589ed48a81a396c408f87301a5024416de46e8a010f2a34768f6e17ac6f4a2b: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:26:21.391711 containerd[1640]: time="2026-01-20T02:26:21.391371053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tfwc7,Uid:4892884d-a213-4dd6-ab53-844c331ae6d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672\"" Jan 20 02:26:21.419496 containerd[1640]: time="2026-01-20T02:26:21.419277879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:26:21.568481 containerd[1640]: time="2026-01-20T02:26:21.568386474Z" level=info msg="CreateContainer within sandbox \"e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2589ed48a81a396c408f87301a5024416de46e8a010f2a34768f6e17ac6f4a2b\"" Jan 20 02:26:21.578775 containerd[1640]: time="2026-01-20T02:26:21.578377242Z" level=info msg="StartContainer for \"2589ed48a81a396c408f87301a5024416de46e8a010f2a34768f6e17ac6f4a2b\"" Jan 20 02:26:21.601021 containerd[1640]: time="2026-01-20T02:26:21.600245179Z" level=info msg="connecting to shim 2589ed48a81a396c408f87301a5024416de46e8a010f2a34768f6e17ac6f4a2b" address="unix:///run/containerd/s/75d17eccaf35dbf9ceaa4f9429e9c4d9d70914ac47af1548d3a1e6afaf16573b" protocol=ttrpc version=3 Jan 20 02:26:21.741918 
containerd[1640]: time="2026-01-20T02:26:21.730085926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-749b857967-xt4pg,Uid:75bc6f23-38ce-4e96-aaf1-83d653850866,Namespace:calico-system,Attempt:0,} returns sandbox id \"70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34\"" Jan 20 02:26:21.961871 systemd[1]: Started cri-containerd-0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f.scope - libcontainer container 0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f. Jan 20 02:26:22.115141 containerd[1640]: time="2026-01-20T02:26:22.115086955Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:22.145389 containerd[1640]: time="2026-01-20T02:26:22.145284349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:26:22.145672 containerd[1640]: time="2026-01-20T02:26:22.145436754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:22.179206 kubelet[3053]: E0120 02:26:22.178901 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:26:22.201787 kubelet[3053]: E0120 02:26:22.179660 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:26:22.201787 kubelet[3053]: E0120 02:26:22.180938 3053 
kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:22.201787 kubelet[3053]: E0120 02:26:22.180994 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:26:22.201999 systemd[1]: Started cri-containerd-2589ed48a81a396c408f87301a5024416de46e8a010f2a34768f6e17ac6f4a2b.scope - libcontainer container 2589ed48a81a396c408f87301a5024416de46e8a010f2a34768f6e17ac6f4a2b. 
Jan 20 02:26:22.260288 systemd-networkd[1538]: cali305838951a4: Gained IPv6LL Jan 20 02:26:22.318191 containerd[1640]: time="2026-01-20T02:26:22.267964411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:26:22.832693 kernel: kauditd_printk_skb: 92 callbacks suppressed Jan 20 02:26:22.832843 kernel: audit: type=1334 audit(1768875982.795:630): prog-id=199 op=LOAD Jan 20 02:26:22.832886 kernel: audit: type=1334 audit(1768875982.809:631): prog-id=200 op=LOAD Jan 20 02:26:22.795000 audit: BPF prog-id=199 op=LOAD Jan 20 02:26:22.809000 audit: BPF prog-id=200 op=LOAD Jan 20 02:26:22.961916 kernel: audit: type=1300 audit(1768875982.809:631): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=5818 pid=5837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:22.962044 kernel: audit: type=1327 audit(1768875982.809:631): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065626439623330333036353963373130313438313262333866653836 Jan 20 02:26:22.962092 kernel: audit: type=1334 audit(1768875982.809:632): prog-id=200 op=UNLOAD Jan 20 02:26:22.962126 kernel: audit: type=1300 audit(1768875982.809:632): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5818 pid=5837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:22.962162 kernel: audit: type=1327 audit(1768875982.809:632): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065626439623330333036353963373130313438313262333866653836 Jan 20 02:26:22.962220 kernel: audit: type=1334 audit(1768875982.809:633): prog-id=201 op=LOAD Jan 20 02:26:22.962247 kernel: audit: type=1300 audit(1768875982.809:633): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=5818 pid=5837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:22.966400 kernel: audit: type=1327 audit(1768875982.809:633): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065626439623330333036353963373130313438313262333866653836 Jan 20 02:26:22.809000 audit[5837]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=5818 pid=5837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:22.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065626439623330333036353963373130313438313262333866653836 Jan 20 02:26:22.809000 audit: BPF prog-id=200 op=UNLOAD Jan 20 02:26:22.809000 audit[5837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5818 pid=5837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:22.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065626439623330333036353963373130313438313262333866653836 Jan 20 02:26:22.809000 audit: BPF prog-id=201 op=LOAD Jan 20 02:26:22.809000 audit[5837]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=5818 pid=5837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:22.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065626439623330333036353963373130313438313262333866653836 Jan 20 02:26:22.809000 audit: BPF prog-id=202 op=LOAD Jan 20 02:26:22.809000 audit[5837]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=5818 pid=5837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:22.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065626439623330333036353963373130313438313262333866653836 Jan 20 02:26:22.809000 audit: BPF prog-id=202 op=UNLOAD Jan 20 02:26:22.809000 audit[5837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5818 pid=5837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:22.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065626439623330333036353963373130313438313262333866653836 Jan 20 02:26:22.809000 audit: BPF prog-id=201 op=UNLOAD Jan 20 02:26:22.809000 audit[5837]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5818 pid=5837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:22.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065626439623330333036353963373130313438313262333866653836 Jan 20 02:26:22.809000 audit: BPF prog-id=203 op=LOAD Jan 20 02:26:22.809000 audit[5837]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=5818 pid=5837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:22.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065626439623330333036353963373130313438313262333866653836 Jan 20 02:26:22.879594 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:26:22.881973 systemd-networkd[1538]: cali8857db02855: Link UP Jan 20 
02:26:22.882381 systemd-networkd[1538]: cali8857db02855: Gained carrier Jan 20 02:26:23.049604 kubelet[3053]: E0120 02:26:23.040005 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:26:23.082000 audit: BPF prog-id=204 op=LOAD Jan 20 02:26:23.088000 audit: BPF prog-id=205 op=LOAD Jan 20 02:26:23.088000 audit[5887]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5619 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:23.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235383965643438613831613339366334303866383733303161353032 Jan 20 02:26:23.088000 audit: BPF prog-id=205 op=UNLOAD Jan 20 02:26:23.088000 audit[5887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5619 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:23.088000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235383965643438613831613339366334303866383733303161353032 Jan 20 02:26:23.088000 audit: BPF prog-id=206 op=LOAD Jan 20 02:26:23.088000 audit[5887]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5619 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:23.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235383965643438613831613339366334303866383733303161353032 Jan 20 02:26:23.088000 audit: BPF prog-id=207 op=LOAD Jan 20 02:26:23.088000 audit[5887]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5619 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:23.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235383965643438613831613339366334303866383733303161353032 Jan 20 02:26:23.096000 audit: BPF prog-id=207 op=UNLOAD Jan 20 02:26:23.096000 audit[5887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5619 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 02:26:23.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235383965643438613831613339366334303866383733303161353032 Jan 20 02:26:23.096000 audit: BPF prog-id=206 op=UNLOAD Jan 20 02:26:23.096000 audit[5887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5619 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:23.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235383965643438613831613339366334303866383733303161353032 Jan 20 02:26:23.096000 audit: BPF prog-id=208 op=LOAD Jan 20 02:26:23.096000 audit[5887]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5619 pid=5887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:23.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235383965643438613831613339366334303866383733303161353032 Jan 20 02:26:23.158640 containerd[1640]: time="2026-01-20T02:26:23.152961308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:23.210390 containerd[1640]: time="2026-01-20T02:26:23.206674694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:26:23.210390 containerd[1640]: time="2026-01-20T02:26:23.206825506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:23.332363 kubelet[3053]: E0120 02:26:23.332247 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:26:23.332363 kubelet[3053]: E0120 02:26:23.332326 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:26:23.351682 kubelet[3053]: E0120 02:26:23.351344 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-749b857967-xt4pg_calico-system(75bc6f23-38ce-4e96-aaf1-83d653850866): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:23.469800 containerd[1640]: time="2026-01-20T02:26:23.461366083Z" level=info msg="connecting to shim 20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" address="unix:///run/containerd/s/a0dbec9a111f414fe0999998804ef1cf7eca113db7a743c8c7dd8cc9e404b896" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:15.417 [INFO][5563] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 
20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:16.441 [INFO][5563] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--764db5c9d9--r829f-eth0 calico-apiserver-764db5c9d9- calico-apiserver ca9f2980-346b-4927-8985-9cb6081e02db 1220 0 2026-01-20 02:23:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:764db5c9d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-764db5c9d9-r829f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8857db02855 [] [] }} ContainerID="3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-r829f" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--r829f-" Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:16.441 [INFO][5563] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-r829f" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--r829f-eth0" Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:18.868 [INFO][5712] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" HandleID="k8s-pod-network.3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" Workload="localhost-k8s-calico--apiserver--764db5c9d9--r829f-eth0" Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:18.868 [INFO][5712] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" 
HandleID="k8s-pod-network.3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" Workload="localhost-k8s-calico--apiserver--764db5c9d9--r829f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026e5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-764db5c9d9-r829f", "timestamp":"2026-01-20 02:26:18.868038581 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:18.868 [INFO][5712] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:20.184 [INFO][5712] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:20.184 [INFO][5712] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:20.302 [INFO][5712] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" host="localhost" Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:20.592 [INFO][5712] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:21.125 [INFO][5712] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:21.317 [INFO][5712] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:21.424 [INFO][5712] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 02:26:23.541040 containerd[1640]: 
2026-01-20 02:26:21.425 [INFO][5712] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" host="localhost" Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:21.467 [INFO][5712] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:21.810 [INFO][5712] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" host="localhost" Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:22.027 [INFO][5712] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" host="localhost" Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:22.027 [INFO][5712] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" host="localhost" Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:22.036 [INFO][5712] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 02:26:23.541040 containerd[1640]: 2026-01-20 02:26:22.053 [INFO][5712] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" HandleID="k8s-pod-network.3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" Workload="localhost-k8s-calico--apiserver--764db5c9d9--r829f-eth0" Jan 20 02:26:23.547959 containerd[1640]: 2026-01-20 02:26:22.120 [INFO][5563] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-r829f" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--r829f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--764db5c9d9--r829f-eth0", GenerateName:"calico-apiserver-764db5c9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca9f2980-346b-4927-8985-9cb6081e02db", ResourceVersion:"1220", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"764db5c9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-764db5c9d9-r829f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8857db02855", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:23.547959 containerd[1640]: 2026-01-20 02:26:22.120 [INFO][5563] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-r829f" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--r829f-eth0" Jan 20 02:26:23.547959 containerd[1640]: 2026-01-20 02:26:22.120 [INFO][5563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8857db02855 ContainerID="3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-r829f" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--r829f-eth0" Jan 20 02:26:23.547959 containerd[1640]: 2026-01-20 02:26:22.994 [INFO][5563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-r829f" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--r829f-eth0" Jan 20 02:26:23.547959 containerd[1640]: 2026-01-20 02:26:22.995 [INFO][5563] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-r829f" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--r829f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--764db5c9d9--r829f-eth0", 
GenerateName:"calico-apiserver-764db5c9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca9f2980-346b-4927-8985-9cb6081e02db", ResourceVersion:"1220", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"764db5c9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd", Pod:"calico-apiserver-764db5c9d9-r829f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8857db02855", MAC:"e2:de:07:d2:1f:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:23.547959 containerd[1640]: 2026-01-20 02:26:23.385 [INFO][5563] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-r829f" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--r829f-eth0" Jan 20 02:26:23.567000 audit[5930]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=5930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:26:23.567000 audit[5930]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 
a0=3 a1=7fff2c5b3e40 a2=0 a3=7fff2c5b3e2c items=0 ppid=3166 pid=5930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:23.567000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:26:23.856000 audit[5930]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=5930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:26:23.856000 audit[5930]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff2c5b3e40 a2=0 a3=0 items=0 ppid=3166 pid=5930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:23.856000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:26:23.902117 containerd[1640]: time="2026-01-20T02:26:23.896393596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:26:24.319886 kubelet[3053]: E0120 02:26:24.294885 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:26:24.387958 containerd[1640]: time="2026-01-20T02:26:24.369776474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:24.579119 containerd[1640]: 
time="2026-01-20T02:26:24.557574200Z" level=info msg="StartContainer for \"2589ed48a81a396c408f87301a5024416de46e8a010f2a34768f6e17ac6f4a2b\" returns successfully" Jan 20 02:26:24.579119 containerd[1640]: time="2026-01-20T02:26:24.573870078Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:26:24.579119 containerd[1640]: time="2026-01-20T02:26:24.574024598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:24.593700 kubelet[3053]: E0120 02:26:24.591865 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:26:24.596649 kubelet[3053]: E0120 02:26:24.594462 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:26:24.596649 kubelet[3053]: E0120 02:26:24.594689 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-749b857967-xt4pg_calico-system(75bc6f23-38ce-4e96-aaf1-83d653850866): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:24.596649 
kubelet[3053]: E0120 02:26:24.594750 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:26:24.745188 containerd[1640]: time="2026-01-20T02:26:24.737713886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4cpfh,Uid:bc4468de-9eba-48b5-88c5-c38dc3f08d39,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f\"" Jan 20 02:26:24.745394 kubelet[3053]: E0120 02:26:24.738882 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:24.837313 containerd[1640]: time="2026-01-20T02:26:24.835103914Z" level=info msg="CreateContainer within sandbox \"0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 02:26:24.864756 systemd-networkd[1538]: cali8857db02855: Gained IPv6LL Jan 20 02:26:24.930000 audit: BPF prog-id=209 op=LOAD Jan 20 02:26:24.930000 audit[5815]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd791be790 a2=94 a3=1 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 02:26:24.930000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:24.930000 audit: BPF prog-id=209 op=UNLOAD Jan 20 02:26:24.930000 audit[5815]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd791be790 a2=94 a3=1 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:24.930000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:24.945000 audit: BPF prog-id=210 op=LOAD Jan 20 02:26:24.945000 audit[5815]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd791be780 a2=94 a3=4 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:24.945000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:24.945000 audit: BPF prog-id=210 op=UNLOAD Jan 20 02:26:24.945000 audit[5815]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd791be780 a2=0 a3=4 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:24.945000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:24.945000 audit: BPF prog-id=211 op=LOAD Jan 20 02:26:24.945000 audit[5815]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd791be5e0 a2=94 a3=5 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:24.945000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:24.945000 audit: BPF prog-id=211 op=UNLOAD Jan 20 02:26:24.945000 audit[5815]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd791be5e0 a2=0 a3=5 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:24.945000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:24.945000 audit: BPF prog-id=212 op=LOAD Jan 20 02:26:24.945000 audit[5815]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd791be800 a2=94 a3=6 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:24.945000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:24.945000 audit: BPF prog-id=212 op=UNLOAD Jan 20 02:26:24.945000 audit[5815]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd791be800 a2=0 a3=6 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:24.945000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:24.945000 audit: BPF prog-id=213 op=LOAD Jan 20 02:26:24.945000 audit[5815]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd791bdfb0 a2=94 a3=88 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:24.945000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 
02:26:24.948000 audit: BPF prog-id=214 op=LOAD Jan 20 02:26:24.948000 audit[5815]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd791bde30 a2=94 a3=2 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:24.948000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:24.948000 audit: BPF prog-id=214 op=UNLOAD Jan 20 02:26:24.948000 audit[5815]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd791bde60 a2=0 a3=7ffd791bdf60 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:24.948000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:24.948000 audit: BPF prog-id=213 op=UNLOAD Jan 20 02:26:24.948000 audit[5815]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=29afed10 a2=0 a3=5c911decb48ed707 items=0 ppid=5466 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:24.948000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 02:26:24.997220 containerd[1640]: time="2026-01-20T02:26:24.997111392Z" level=info msg="connecting to shim 3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd" address="unix:///run/containerd/s/da82e0e9d619821fabfc9103c8cacf4ed0e7c8dad27003864b1d3c717d089a4a" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:26:25.012353 systemd[1]: Started cri-containerd-20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f.scope - libcontainer container 20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f. 
Jan 20 02:26:25.114011 containerd[1640]: time="2026-01-20T02:26:25.113854649Z" level=info msg="Container d792fd1a4a4b89eb814bd11c0718f6f1a26790818636f27f627b765af1262477: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:26:25.124376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3933949935.mount: Deactivated successfully. Jan 20 02:26:25.180158 systemd-networkd[1538]: cali8d8a105b324: Link UP Jan 20 02:26:25.201477 systemd-networkd[1538]: cali8d8a105b324: Gained carrier Jan 20 02:26:25.285592 kubelet[3053]: E0120 02:26:25.283361 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:25.288594 containerd[1640]: time="2026-01-20T02:26:25.288446044Z" level=info msg="CreateContainer within sandbox \"0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d792fd1a4a4b89eb814bd11c0718f6f1a26790818636f27f627b765af1262477\"" Jan 20 02:26:25.316989 containerd[1640]: time="2026-01-20T02:26:25.315268256Z" level=info msg="StartContainer for \"d792fd1a4a4b89eb814bd11c0718f6f1a26790818636f27f627b765af1262477\"" Jan 20 02:26:25.331709 containerd[1640]: time="2026-01-20T02:26:25.331057244Z" level=info msg="connecting to shim d792fd1a4a4b89eb814bd11c0718f6f1a26790818636f27f627b765af1262477" address="unix:///run/containerd/s/f684acbb388b8f87b8122804fe29d415d2b77234d12a7000236465583b867987" protocol=ttrpc version=3 Jan 20 02:26:25.410000 audit: BPF prog-id=215 op=LOAD Jan 20 02:26:25.410000 audit[6016]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc9727a340 a2=98 a3=1999999999999999 items=0 ppid=5466 pid=6016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:25.410000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 02:26:25.411000 audit: BPF prog-id=215 op=UNLOAD Jan 20 02:26:25.411000 audit[6016]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc9727a310 a3=0 items=0 ppid=5466 pid=6016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:25.411000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 02:26:25.411000 audit: BPF prog-id=216 op=LOAD Jan 20 02:26:25.411000 audit[6016]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc9727a220 a2=94 a3=ffff items=0 ppid=5466 pid=6016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:25.411000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 02:26:25.411000 audit: BPF prog-id=216 op=UNLOAD Jan 20 02:26:25.411000 audit[6016]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc9727a220 a2=94 a3=ffff items=0 ppid=5466 pid=6016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:25.411000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 02:26:25.411000 audit: BPF prog-id=217 op=LOAD Jan 20 02:26:25.411000 audit[6016]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc9727a260 a2=94 a3=7ffc9727a440 items=0 ppid=5466 pid=6016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:25.411000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 02:26:25.411000 audit: BPF prog-id=217 op=UNLOAD Jan 20 02:26:25.411000 audit[6016]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc9727a260 a2=94 a3=7ffc9727a440 items=0 ppid=5466 pid=6016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:25.411000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:17.616 [INFO][5720] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:18.273 [INFO][5720] cni-plugin/plugin.go 
340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9lglv-eth0 csi-node-driver- calico-system 797382c1-6a9f-48bd-be88-5e85feeef509 967 0 2026-01-20 02:23:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9lglv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8d8a105b324 [] [] }} ContainerID="8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" Namespace="calico-system" Pod="csi-node-driver-9lglv" WorkloadEndpoint="localhost-k8s-csi--node--driver--9lglv-" Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:18.273 [INFO][5720] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" Namespace="calico-system" Pod="csi-node-driver-9lglv" WorkloadEndpoint="localhost-k8s-csi--node--driver--9lglv-eth0" Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:20.108 [INFO][5787] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" HandleID="k8s-pod-network.8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" Workload="localhost-k8s-csi--node--driver--9lglv-eth0" Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:20.109 [INFO][5787] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" HandleID="k8s-pod-network.8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" Workload="localhost-k8s-csi--node--driver--9lglv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fac0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9lglv", "timestamp":"2026-01-20 02:26:20.108868495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:20.109 [INFO][5787] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:22.051 [INFO][5787] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:22.051 [INFO][5787] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:22.193 [INFO][5787] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" host="localhost" Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:22.874 [INFO][5787] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:23.654 [INFO][5787] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:23.726 [INFO][5787] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:24.278 [INFO][5787] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:24.279 [INFO][5787] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" host="localhost" Jan 20 02:26:25.514849 
containerd[1640]: 2026-01-20 02:26:24.457 [INFO][5787] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81 Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:24.584 [INFO][5787] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" host="localhost" Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:24.983 [INFO][5787] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" host="localhost" Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:24.983 [INFO][5787] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" host="localhost" Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:24.983 [INFO][5787] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 02:26:25.514849 containerd[1640]: 2026-01-20 02:26:24.983 [INFO][5787] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" HandleID="k8s-pod-network.8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" Workload="localhost-k8s-csi--node--driver--9lglv-eth0" Jan 20 02:26:25.517302 containerd[1640]: 2026-01-20 02:26:25.122 [INFO][5720] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" Namespace="calico-system" Pod="csi-node-driver-9lglv" WorkloadEndpoint="localhost-k8s-csi--node--driver--9lglv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9lglv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"797382c1-6a9f-48bd-be88-5e85feeef509", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 23, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9lglv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d8a105b324", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:25.517302 containerd[1640]: 2026-01-20 02:26:25.122 [INFO][5720] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" Namespace="calico-system" Pod="csi-node-driver-9lglv" WorkloadEndpoint="localhost-k8s-csi--node--driver--9lglv-eth0" Jan 20 02:26:25.517302 containerd[1640]: 2026-01-20 02:26:25.123 [INFO][5720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d8a105b324 ContainerID="8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" Namespace="calico-system" Pod="csi-node-driver-9lglv" WorkloadEndpoint="localhost-k8s-csi--node--driver--9lglv-eth0" Jan 20 02:26:25.517302 containerd[1640]: 2026-01-20 02:26:25.204 [INFO][5720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" Namespace="calico-system" Pod="csi-node-driver-9lglv" WorkloadEndpoint="localhost-k8s-csi--node--driver--9lglv-eth0" Jan 20 02:26:25.517302 containerd[1640]: 2026-01-20 02:26:25.205 [INFO][5720] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" Namespace="calico-system" Pod="csi-node-driver-9lglv" WorkloadEndpoint="localhost-k8s-csi--node--driver--9lglv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9lglv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"797382c1-6a9f-48bd-be88-5e85feeef509", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 23, 39, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81", Pod:"csi-node-driver-9lglv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d8a105b324", MAC:"ea:9d:8a:88:aa:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:25.517302 containerd[1640]: 2026-01-20 02:26:25.329 [INFO][5720] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" Namespace="calico-system" Pod="csi-node-driver-9lglv" WorkloadEndpoint="localhost-k8s-csi--node--driver--9lglv-eth0" Jan 20 02:26:25.530581 kubelet[3053]: E0120 02:26:25.521317 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:26:25.616000 audit: BPF prog-id=218 op=LOAD Jan 20 02:26:25.650000 audit: BPF prog-id=219 op=LOAD Jan 20 02:26:25.650000 audit[5957]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa238 a2=98 a3=0 items=0 ppid=5928 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:25.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230623231663363343062623137366564343730626232633936393139 Jan 20 02:26:25.650000 audit: BPF prog-id=219 op=UNLOAD Jan 20 02:26:25.650000 audit[5957]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5928 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:25.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230623231663363343062623137366564343730626232633936393139 Jan 20 02:26:25.650000 audit: BPF prog-id=220 op=LOAD Jan 20 02:26:25.650000 audit[5957]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c0001fa488 a2=98 a3=0 items=0 ppid=5928 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:25.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230623231663363343062623137366564343730626232633936393139 Jan 20 02:26:25.650000 audit: BPF prog-id=221 op=LOAD Jan 20 02:26:25.650000 audit[5957]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001fa218 a2=98 a3=0 items=0 ppid=5928 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:25.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230623231663363343062623137366564343730626232633936393139 Jan 20 02:26:25.651000 audit: BPF prog-id=221 op=UNLOAD Jan 20 02:26:25.651000 audit[5957]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5928 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:25.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230623231663363343062623137366564343730626232633936393139 Jan 20 02:26:25.651000 audit: BPF prog-id=220 op=UNLOAD Jan 20 02:26:25.651000 audit[5957]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5928 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:25.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230623231663363343062623137366564343730626232633936393139 Jan 20 02:26:25.651000 audit: BPF prog-id=222 op=LOAD Jan 20 02:26:25.651000 audit[5957]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa6e8 a2=98 a3=0 items=0 ppid=5928 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:25.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230623231663363343062623137366564343730626232633936393139 Jan 20 02:26:25.721968 kubelet[3053]: I0120 02:26:25.720090 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-m48wx" podStartSLOduration=275.720065574 podStartE2EDuration="4m35.720065574s" podCreationTimestamp="2026-01-20 02:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 02:26:25.498394355 +0000 UTC m=+276.397036564" watchObservedRunningTime="2026-01-20 02:26:25.720065574 +0000 UTC m=+276.618707774" Jan 20 02:26:25.723896 systemd[1]: Started cri-containerd-3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd.scope - libcontainer container 
3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd. Jan 20 02:26:25.844478 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:26:25.905285 systemd[1]: Started cri-containerd-d792fd1a4a4b89eb814bd11c0718f6f1a26790818636f27f627b765af1262477.scope - libcontainer container d792fd1a4a4b89eb814bd11c0718f6f1a26790818636f27f627b765af1262477. Jan 20 02:26:26.153720 containerd[1640]: time="2026-01-20T02:26:26.153478897Z" level=info msg="connecting to shim 8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81" address="unix:///run/containerd/s/cc708dd38852ff1ce790c4371c6e7ace69407413761955bac78d9e18bd5cf0ef" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:26:26.062000 audit[6072]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=6072 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:26:26.062000 audit[6072]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffedd5a0280 a2=0 a3=7ffedd5a026c items=0 ppid=3166 pid=6072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:26.062000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:26:26.321000 audit[6072]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=6072 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:26:26.321000 audit[6072]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffedd5a0280 a2=0 a3=0 items=0 ppid=3166 pid=6072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:26.321000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:26:26.329000 audit: BPF prog-id=223 op=LOAD Jan 20 02:26:26.391000 audit: BPF prog-id=224 op=LOAD Jan 20 02:26:26.391000 audit[6017]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5818 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:26.391000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437393266643161346134623839656238313462643131633037313866 Jan 20 02:26:26.398000 audit: BPF prog-id=224 op=UNLOAD Jan 20 02:26:26.398000 audit[6017]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5818 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:26.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437393266643161346134623839656238313462643131633037313866 Jan 20 02:26:26.398000 audit: BPF prog-id=225 op=LOAD Jan 20 02:26:26.398000 audit[6017]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5818 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:26.398000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437393266643161346134623839656238313462643131633037313866 Jan 20 02:26:26.410000 audit: BPF prog-id=226 op=LOAD Jan 20 02:26:26.410000 audit[6017]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5818 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:26.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437393266643161346134623839656238313462643131633037313866 Jan 20 02:26:26.418000 audit: BPF prog-id=226 op=UNLOAD Jan 20 02:26:26.418000 audit[6017]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5818 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:26.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437393266643161346134623839656238313462643131633037313866 Jan 20 02:26:26.418000 audit: BPF prog-id=225 op=UNLOAD Jan 20 02:26:26.418000 audit[6017]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5818 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:26:26.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437393266643161346134623839656238313462643131633037313866 Jan 20 02:26:26.418000 audit: BPF prog-id=227 op=LOAD Jan 20 02:26:26.418000 audit[6017]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5818 pid=6017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:26.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437393266643161346134623839656238313462643131633037313866 Jan 20 02:26:26.474478 kubelet[3053]: E0120 02:26:26.474397 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:26.519956 systemd-networkd[1538]: cali8d8a105b324: Gained IPv6LL Jan 20 02:26:26.959420 systemd[1]: Started cri-containerd-8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81.scope - libcontainer container 8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81. 
Jan 20 02:26:26.999000 audit: BPF prog-id=228 op=LOAD Jan 20 02:26:27.039565 containerd[1640]: time="2026-01-20T02:26:27.032980409Z" level=error msg="get state for 20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f" error="context deadline exceeded" Jan 20 02:26:27.039565 containerd[1640]: time="2026-01-20T02:26:27.033058084Z" level=warning msg="unknown status" status=0 Jan 20 02:26:27.067000 audit: BPF prog-id=229 op=LOAD Jan 20 02:26:27.067000 audit[6007]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000194238 a2=98 a3=0 items=0 ppid=5986 pid=6007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:27.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362346439346539363634356139626135323461386637636465336164 Jan 20 02:26:27.067000 audit: BPF prog-id=229 op=UNLOAD Jan 20 02:26:27.067000 audit[6007]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5986 pid=6007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:27.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362346439346539363634356139626135323461386637636465336164 Jan 20 02:26:27.067000 audit: BPF prog-id=230 op=LOAD Jan 20 02:26:27.067000 audit[6007]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000194488 a2=98 a3=0 items=0 ppid=5986 pid=6007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:27.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362346439346539363634356139626135323461386637636465336164 Jan 20 02:26:27.067000 audit: BPF prog-id=231 op=LOAD Jan 20 02:26:27.067000 audit[6007]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000194218 a2=98 a3=0 items=0 ppid=5986 pid=6007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:27.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362346439346539363634356139626135323461386637636465336164 Jan 20 02:26:27.067000 audit: BPF prog-id=231 op=UNLOAD Jan 20 02:26:27.067000 audit[6007]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5986 pid=6007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:27.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362346439346539363634356139626135323461386637636465336164 Jan 20 02:26:27.067000 audit: BPF prog-id=230 op=UNLOAD Jan 20 02:26:27.067000 audit[6007]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5986 pid=6007 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:27.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362346439346539363634356139626135323461386637636465336164 Jan 20 02:26:27.067000 audit: BPF prog-id=232 op=LOAD Jan 20 02:26:27.067000 audit[6007]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001946e8 a2=98 a3=0 items=0 ppid=5986 pid=6007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:27.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362346439346539363634356139626135323461386637636465336164 Jan 20 02:26:27.118239 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:26:27.486034 containerd[1640]: time="2026-01-20T02:26:27.474136379Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 02:26:27.860078 containerd[1640]: time="2026-01-20T02:26:27.849632473Z" level=info msg="StartContainer for \"d792fd1a4a4b89eb814bd11c0718f6f1a26790818636f27f627b765af1262477\" returns successfully" Jan 20 02:26:28.060359 kernel: kauditd_printk_skb: 166 callbacks suppressed Jan 20 02:26:28.062234 kernel: audit: type=1334 audit(1768875988.011:692): prog-id=233 op=LOAD Jan 20 02:26:28.011000 audit: BPF prog-id=233 op=LOAD Jan 20 02:26:28.069000 audit: BPF prog-id=234 op=LOAD Jan 20 02:26:28.069000 audit[6099]: SYSCALL arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=6078 pid=6099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:28.111407 kernel: audit: type=1334 audit(1768875988.069:693): prog-id=234 op=LOAD Jan 20 02:26:28.111634 kernel: audit: type=1300 audit(1768875988.069:693): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=6078 pid=6099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:28.111719 kernel: audit: type=1327 audit(1768875988.069:693): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633337633930386335613762346332623663373365386233333266 Jan 20 02:26:28.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633337633930386335613762346332623663373365386233333266 Jan 20 02:26:28.069000 audit: BPF prog-id=234 op=UNLOAD Jan 20 02:26:28.226028 kernel: audit: type=1334 audit(1768875988.069:694): prog-id=234 op=UNLOAD Jan 20 02:26:28.226202 kernel: audit: type=1300 audit(1768875988.069:694): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6078 pid=6099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:28.069000 audit[6099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6078 pid=6099 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:28.226404 containerd[1640]: time="2026-01-20T02:26:28.218058785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746557d8fc-ztfh7,Uid:e572f9c2-ce5a-4d3c-956a-a140a15040fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f\"" Jan 20 02:26:28.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633337633930386335613762346332623663373365386233333266 Jan 20 02:26:28.296400 kernel: audit: type=1327 audit(1768875988.069:694): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633337633930386335613762346332623663373365386233333266 Jan 20 02:26:28.375736 kernel: audit: type=1334 audit(1768875988.071:695): prog-id=235 op=LOAD Jan 20 02:26:28.375931 kernel: audit: type=1300 audit(1768875988.071:695): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=6078 pid=6099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:28.071000 audit: BPF prog-id=235 op=LOAD Jan 20 02:26:28.071000 audit[6099]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=6078 pid=6099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:26:28.377053 containerd[1640]: time="2026-01-20T02:26:28.351702430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:26:28.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633337633930386335613762346332623663373365386233333266 Jan 20 02:26:28.393932 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:26:28.071000 audit: BPF prog-id=236 op=LOAD Jan 20 02:26:28.071000 audit[6099]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=6078 pid=6099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:28.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633337633930386335613762346332623663373365386233333266 Jan 20 02:26:28.071000 audit: BPF prog-id=236 op=UNLOAD Jan 20 02:26:28.071000 audit[6099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6078 pid=6099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:28.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633337633930386335613762346332623663373365386233333266 Jan 20 02:26:28.071000 audit: BPF 
prog-id=235 op=UNLOAD Jan 20 02:26:28.071000 audit[6099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6078 pid=6099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:28.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633337633930386335613762346332623663373365386233333266 Jan 20 02:26:28.435635 kernel: audit: type=1327 audit(1768875988.071:695): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633337633930386335613762346332623663373365386233333266 Jan 20 02:26:28.071000 audit: BPF prog-id=237 op=LOAD Jan 20 02:26:28.071000 audit[6099]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=6078 pid=6099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:28.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866633337633930386335613762346332623663373365386233333266 Jan 20 02:26:28.598054 containerd[1640]: time="2026-01-20T02:26:28.597968940Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:28.612656 containerd[1640]: time="2026-01-20T02:26:28.610888771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:26:28.612656 containerd[1640]: time="2026-01-20T02:26:28.611052787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:28.620118 kubelet[3053]: E0120 02:26:28.616937 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:26:28.620118 kubelet[3053]: E0120 02:26:28.617022 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:26:28.620118 kubelet[3053]: E0120 02:26:28.617119 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:28.620118 kubelet[3053]: E0120 02:26:28.617167 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:26:28.760704 containerd[1640]: time="2026-01-20T02:26:28.752723902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-r829f,Uid:ca9f2980-346b-4927-8985-9cb6081e02db,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd\"" Jan 20 02:26:28.818136 containerd[1640]: time="2026-01-20T02:26:28.817098060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:26:29.080241 containerd[1640]: time="2026-01-20T02:26:29.063757026Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:29.152000 audit[6165]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=6165 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:26:29.152000 audit[6165]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd4c083810 a2=0 a3=7ffd4c0837fc items=0 ppid=3166 pid=6165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:29.152000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:26:29.176618 containerd[1640]: time="2026-01-20T02:26:29.175009451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:26:29.176618 containerd[1640]: time="2026-01-20T02:26:29.175151877Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:29.182057 kubelet[3053]: E0120 02:26:29.177901 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:26:29.182057 kubelet[3053]: E0120 02:26:29.178364 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:26:29.182057 kubelet[3053]: E0120 02:26:29.178455 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:29.182057 kubelet[3053]: E0120 02:26:29.178497 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:26:29.199293 kubelet[3053]: E0120 02:26:29.199249 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 20 02:26:29.269582 kubelet[3053]: E0120 02:26:29.269209 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:26:29.389439 systemd-networkd[1538]: cali74400b0092c: Link UP Jan 20 02:26:29.390643 systemd-networkd[1538]: cali74400b0092c: Gained carrier Jan 20 02:26:29.447000 audit[6165]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=6165 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:26:29.447000 audit[6165]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd4c083810 a2=0 a3=0 items=0 ppid=3166 pid=6165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:29.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:26:29.541994 systemd-networkd[1538]: vxlan.calico: Link UP Jan 20 02:26:29.542010 systemd-networkd[1538]: vxlan.calico: Gained carrier Jan 20 02:26:29.768475 kubelet[3053]: I0120 02:26:29.740409 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4cpfh" podStartSLOduration=279.740390871 podStartE2EDuration="4m39.740390871s" podCreationTimestamp="2026-01-20 02:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-20 02:26:29.668037934 +0000 UTC m=+280.566680143" watchObservedRunningTime="2026-01-20 02:26:29.740390871 +0000 UTC m=+280.639033070" Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:26.893 [INFO][6050] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--764db5c9d9--v64bv-eth0 calico-apiserver-764db5c9d9- calico-apiserver 4d193768-31ad-4962-ae34-80e85c7499df 1224 0 2026-01-20 02:23:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:764db5c9d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-764db5c9d9-v64bv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali74400b0092c [] [] }} ContainerID="933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-v64bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--v64bv-" Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:26.903 [INFO][6050] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-v64bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--v64bv-eth0" Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:27.682 [INFO][6117] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" HandleID="k8s-pod-network.933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" Workload="localhost-k8s-calico--apiserver--764db5c9d9--v64bv-eth0" Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:27.689 [INFO][6117] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" HandleID="k8s-pod-network.933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" Workload="localhost-k8s-calico--apiserver--764db5c9d9--v64bv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba1e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-764db5c9d9-v64bv", "timestamp":"2026-01-20 02:26:27.682422909 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:27.689 [INFO][6117] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:27.689 [INFO][6117] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:27.689 [INFO][6117] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:28.052 [INFO][6117] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" host="localhost" Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:28.182 [INFO][6117] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:28.487 [INFO][6117] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:28.623 [INFO][6117] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:28.785 [INFO][6117] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:28.825 [INFO][6117] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" host="localhost" Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:28.951 [INFO][6117] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:29.137 [INFO][6117] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" host="localhost" Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:29.271 [INFO][6117] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" host="localhost" Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:29.271 [INFO][6117] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" host="localhost" Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:29.271 [INFO][6117] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 02:26:30.207490 containerd[1640]: 2026-01-20 02:26:29.271 [INFO][6117] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" HandleID="k8s-pod-network.933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" Workload="localhost-k8s-calico--apiserver--764db5c9d9--v64bv-eth0" Jan 20 02:26:30.209051 containerd[1640]: 2026-01-20 02:26:29.370 [INFO][6050] cni-plugin/k8s.go 418: Populated endpoint ContainerID="933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-v64bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--v64bv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--764db5c9d9--v64bv-eth0", GenerateName:"calico-apiserver-764db5c9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d193768-31ad-4962-ae34-80e85c7499df", ResourceVersion:"1224", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"764db5c9d9", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-764db5c9d9-v64bv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74400b0092c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:30.209051 containerd[1640]: 2026-01-20 02:26:29.370 [INFO][6050] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-v64bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--v64bv-eth0" Jan 20 02:26:30.209051 containerd[1640]: 2026-01-20 02:26:29.370 [INFO][6050] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74400b0092c ContainerID="933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-v64bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--v64bv-eth0" Jan 20 02:26:30.209051 containerd[1640]: 2026-01-20 02:26:29.383 [INFO][6050] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-v64bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--v64bv-eth0" Jan 20 02:26:30.209051 containerd[1640]: 2026-01-20 02:26:29.410 [INFO][6050] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-v64bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--v64bv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--764db5c9d9--v64bv-eth0", GenerateName:"calico-apiserver-764db5c9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d193768-31ad-4962-ae34-80e85c7499df", ResourceVersion:"1224", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 2, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"764db5c9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d", Pod:"calico-apiserver-764db5c9d9-v64bv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74400b0092c", MAC:"5a:9f:37:45:6b:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 02:26:30.209051 containerd[1640]: 2026-01-20 02:26:29.824 [INFO][6050] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" Namespace="calico-apiserver" Pod="calico-apiserver-764db5c9d9-v64bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--764db5c9d9--v64bv-eth0" Jan 20 02:26:30.302848 kubelet[3053]: E0120 02:26:30.295361 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:30.332729 kubelet[3053]: E0120 02:26:30.329443 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:26:30.332729 kubelet[3053]: E0120 02:26:30.330204 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:26:30.618748 containerd[1640]: time="2026-01-20T02:26:30.618421025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9lglv,Uid:797382c1-6a9f-48bd-be88-5e85feeef509,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81\"" Jan 20 
02:26:30.734000 audit[6187]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=6187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:26:30.734000 audit[6187]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc2d3a1270 a2=0 a3=7ffc2d3a125c items=0 ppid=3166 pid=6187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:30.734000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:26:30.760000 audit[6187]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=6187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:26:30.760000 audit[6187]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc2d3a1270 a2=0 a3=0 items=0 ppid=3166 pid=6187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:30.760000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:26:30.780831 containerd[1640]: time="2026-01-20T02:26:30.775432028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:26:31.057290 containerd[1640]: time="2026-01-20T02:26:31.057034292Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:31.075983 containerd[1640]: time="2026-01-20T02:26:31.073616844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:26:31.076385 containerd[1640]: 
time="2026-01-20T02:26:31.076283704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:31.077602 systemd-networkd[1538]: cali74400b0092c: Gained IPv6LL Jan 20 02:26:31.088217 kubelet[3053]: E0120 02:26:31.086171 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:26:31.088217 kubelet[3053]: E0120 02:26:31.086258 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:26:31.088217 kubelet[3053]: E0120 02:26:31.086353 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:31.187213 containerd[1640]: time="2026-01-20T02:26:31.183667297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:26:31.368000 audit: BPF prog-id=238 op=LOAD Jan 20 02:26:31.368000 audit[6207]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff3cc2ac10 a2=98 a3=0 items=0 ppid=5466 pid=6207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:31.368000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:26:31.368000 audit: BPF prog-id=238 op=UNLOAD Jan 20 02:26:31.368000 audit[6207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff3cc2abe0 a3=0 items=0 ppid=5466 pid=6207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:31.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:26:31.368000 audit: BPF prog-id=239 op=LOAD Jan 20 02:26:31.368000 audit[6207]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff3cc2aa20 a2=94 a3=54428f items=0 ppid=5466 pid=6207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:31.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:26:31.368000 audit: BPF prog-id=239 op=UNLOAD Jan 20 02:26:31.368000 audit[6207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff3cc2aa20 a2=94 a3=54428f items=0 ppid=5466 pid=6207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:31.368000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:26:31.368000 audit: BPF prog-id=240 op=LOAD Jan 20 02:26:31.368000 audit[6207]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff3cc2aa50 a2=94 a3=2 items=0 ppid=5466 pid=6207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:31.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:26:31.368000 audit: BPF prog-id=240 op=UNLOAD Jan 20 02:26:31.368000 audit[6207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff3cc2aa50 a2=0 a3=2 items=0 ppid=5466 pid=6207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:31.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:26:31.368000 audit: BPF prog-id=241 op=LOAD Jan 20 02:26:31.368000 audit[6207]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff3cc2a800 a2=94 a3=4 items=0 ppid=5466 pid=6207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:31.368000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:26:31.368000 audit: BPF prog-id=241 op=UNLOAD Jan 20 02:26:31.368000 audit[6207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff3cc2a800 a2=94 a3=4 items=0 ppid=5466 pid=6207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:31.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:26:31.368000 audit: BPF prog-id=242 op=LOAD Jan 20 02:26:31.368000 audit[6207]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff3cc2a900 a2=94 a3=7fff3cc2aa80 items=0 ppid=5466 pid=6207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:31.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:26:31.368000 audit: BPF prog-id=242 op=UNLOAD Jan 20 02:26:31.368000 audit[6207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff3cc2a900 a2=0 a3=7fff3cc2aa80 items=0 ppid=5466 pid=6207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:31.368000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:26:31.368000 audit: BPF prog-id=243 op=LOAD Jan 20 02:26:31.368000 audit[6207]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff3cc2a030 a2=94 a3=2 items=0 ppid=5466 pid=6207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:31.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:26:31.368000 audit: BPF prog-id=243 op=UNLOAD Jan 20 02:26:31.368000 audit[6207]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff3cc2a030 a2=0 a3=2 items=0 ppid=5466 pid=6207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:31.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:26:31.368000 audit: BPF prog-id=244 op=LOAD Jan 20 02:26:31.368000 audit[6207]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff3cc2a130 a2=94 a3=30 items=0 ppid=5466 pid=6207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:31.368000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 02:26:31.587290 kubelet[3053]: E0120 02:26:31.580466 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:31.613099 containerd[1640]: time="2026-01-20T02:26:31.613032535Z" level=info msg="connecting to shim 933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d" address="unix:///run/containerd/s/66bdc3b23114bca043339da11e3c15b8065b8b37ca6bfce15e9ccc81ac9af3aa" namespace=k8s.io protocol=ttrpc version=3 Jan 20 02:26:31.640194 containerd[1640]: time="2026-01-20T02:26:31.639889161Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:31.658570 containerd[1640]: time="2026-01-20T02:26:31.658030821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:26:31.670233 containerd[1640]: time="2026-01-20T02:26:31.659409141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:31.672762 kubelet[3053]: E0120 02:26:31.672599 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:26:31.673131 kubelet[3053]: E0120 02:26:31.673004 3053 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:26:31.697638 kubelet[3053]: E0120 02:26:31.697586 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:31.715500 kubelet[3053]: E0120 02:26:31.715193 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:26:31.911000 audit: BPF prog-id=245 op=LOAD Jan 20 02:26:31.911000 audit[6231]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff7b7c300 a2=98 a3=0 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:26:31.911000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:31.911000 audit: BPF prog-id=245 op=UNLOAD Jan 20 02:26:31.911000 audit[6231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffff7b7c2d0 a3=0 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:31.911000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:31.910441 systemd-networkd[1538]: vxlan.calico: Gained IPv6LL Jan 20 02:26:32.129000 audit[6233]: NETFILTER_CFG table=filter:127 family=2 entries=17 op=nft_register_rule pid=6233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:26:32.129000 audit[6233]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc8ae6ea90 a2=0 a3=7ffc8ae6ea7c items=0 ppid=3166 pid=6233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:32.129000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:26:32.157000 audit: BPF prog-id=246 op=LOAD Jan 20 02:26:32.157000 audit[6231]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff7b7c0f0 a2=94 a3=54428f items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:32.157000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:32.157000 audit: BPF prog-id=246 op=UNLOAD Jan 20 02:26:32.157000 audit[6231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffff7b7c0f0 a2=94 a3=54428f items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:32.157000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:32.157000 audit: BPF prog-id=247 op=LOAD Jan 20 02:26:32.157000 audit[6231]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff7b7c120 a2=94 a3=2 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:32.157000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:32.158000 audit: BPF prog-id=247 op=UNLOAD Jan 20 02:26:32.158000 audit[6231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffff7b7c120 a2=0 a3=2 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:32.158000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:32.420000 audit[6233]: NETFILTER_CFG table=nat:128 family=2 entries=35 op=nft_register_chain pid=6233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:26:32.420000 audit[6233]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc8ae6ea90 a2=0 a3=7ffc8ae6ea7c items=0 ppid=3166 pid=6233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:32.420000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:26:32.682120 kubelet[3053]: E0120 02:26:32.673412 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:26:32.992400 systemd[1]: Started cri-containerd-933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d.scope - libcontainer container 
933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d. Jan 20 02:26:33.450000 audit: BPF prog-id=248 op=LOAD Jan 20 02:26:33.479242 kernel: kauditd_printk_skb: 87 callbacks suppressed Jan 20 02:26:33.480608 kernel: audit: type=1334 audit(1768875993.450:725): prog-id=248 op=LOAD Jan 20 02:26:33.450000 audit: BPF prog-id=249 op=LOAD Jan 20 02:26:33.503807 kernel: audit: type=1334 audit(1768875993.450:726): prog-id=249 op=LOAD Jan 20 02:26:33.600198 kernel: audit: type=1300 audit(1768875993.450:726): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000168238 a2=98 a3=0 items=0 ppid=6210 pid=6235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:33.600349 kernel: audit: type=1327 audit(1768875993.450:726): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933336531313432363262383363316565363961653735623736393539 Jan 20 02:26:33.450000 audit[6235]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000168238 a2=98 a3=0 items=0 ppid=6210 pid=6235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:33.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933336531313432363262383363316565363961653735623736393539 Jan 20 02:26:33.558915 systemd-resolved[1292]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 02:26:33.450000 audit: BPF prog-id=249 op=UNLOAD Jan 20 02:26:33.450000 
audit[6235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6210 pid=6235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:33.763656 kernel: audit: type=1334 audit(1768875993.450:727): prog-id=249 op=UNLOAD Jan 20 02:26:33.763866 kernel: audit: type=1300 audit(1768875993.450:727): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6210 pid=6235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:33.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933336531313432363262383363316565363961653735623736393539 Jan 20 02:26:33.877609 kernel: audit: type=1327 audit(1768875993.450:727): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933336531313432363262383363316565363961653735623736393539 Jan 20 02:26:33.877844 kernel: audit: type=1334 audit(1768875993.450:728): prog-id=250 op=LOAD Jan 20 02:26:33.450000 audit: BPF prog-id=250 op=LOAD Jan 20 02:26:33.887979 kernel: audit: type=1300 audit(1768875993.450:728): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000168488 a2=98 a3=0 items=0 ppid=6210 pid=6235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:33.450000 audit[6235]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000168488 a2=98 a3=0 
items=0 ppid=6210 pid=6235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:33.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933336531313432363262383363316565363961653735623736393539 Jan 20 02:26:34.026342 containerd[1640]: time="2026-01-20T02:26:34.026151694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-764db5c9d9-v64bv,Uid:4d193768-31ad-4962-ae34-80e85c7499df,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d\"" Jan 20 02:26:34.051105 kernel: audit: type=1327 audit(1768875993.450:728): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933336531313432363262383363316565363961653735623736393539 Jan 20 02:26:33.450000 audit: BPF prog-id=251 op=LOAD Jan 20 02:26:33.450000 audit[6235]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000168218 a2=98 a3=0 items=0 ppid=6210 pid=6235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:33.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933336531313432363262383363316565363961653735623736393539 Jan 20 02:26:33.462000 audit: BPF prog-id=251 op=UNLOAD Jan 20 02:26:33.462000 audit[6235]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6210 pid=6235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:33.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933336531313432363262383363316565363961653735623736393539 Jan 20 02:26:33.462000 audit: BPF prog-id=250 op=UNLOAD Jan 20 02:26:33.462000 audit[6235]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6210 pid=6235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:33.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933336531313432363262383363316565363961653735623736393539 Jan 20 02:26:33.462000 audit: BPF prog-id=252 op=LOAD Jan 20 02:26:33.462000 audit[6235]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001686e8 a2=98 a3=0 items=0 ppid=6210 pid=6235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:33.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933336531313432363262383363316565363961653735623736393539 Jan 20 02:26:34.092326 containerd[1640]: time="2026-01-20T02:26:34.088421301Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:26:34.192000 audit: BPF prog-id=253 op=LOAD Jan 20 02:26:34.192000 audit[6231]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff7b7bfe0 a2=94 a3=1 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:34.192000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:34.192000 audit: BPF prog-id=253 op=UNLOAD Jan 20 02:26:34.192000 audit[6231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffff7b7bfe0 a2=94 a3=1 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:34.192000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:34.231472 containerd[1640]: time="2026-01-20T02:26:34.231217953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:34.255201 containerd[1640]: time="2026-01-20T02:26:34.241868160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:26:34.255201 containerd[1640]: time="2026-01-20T02:26:34.242033680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 
02:26:34.255450 kubelet[3053]: E0120 02:26:34.248689 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:26:34.255450 kubelet[3053]: E0120 02:26:34.248785 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:26:34.255450 kubelet[3053]: E0120 02:26:34.248883 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:34.255450 kubelet[3053]: E0120 02:26:34.248926 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:26:34.323000 audit: BPF prog-id=254 op=LOAD Jan 20 02:26:34.323000 audit[6231]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffff7b7bfd0 a2=94 a3=4 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:34.323000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:34.325000 audit: BPF prog-id=254 op=UNLOAD Jan 20 02:26:34.325000 audit[6231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffff7b7bfd0 a2=0 a3=4 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:34.325000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:34.325000 audit: BPF prog-id=255 op=LOAD Jan 20 02:26:34.325000 audit[6231]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff7b7be30 a2=94 a3=5 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:34.325000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:34.325000 audit: BPF prog-id=255 op=UNLOAD Jan 20 02:26:34.325000 audit[6231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffff7b7be30 a2=0 a3=5 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:34.325000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:34.328000 audit: BPF prog-id=256 op=LOAD Jan 20 02:26:34.328000 audit[6231]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffff7b7c050 a2=94 a3=6 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:34.328000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:34.328000 audit: BPF prog-id=256 op=UNLOAD Jan 20 02:26:34.328000 audit[6231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffff7b7c050 a2=0 a3=6 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:34.328000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:34.328000 audit: BPF prog-id=257 op=LOAD Jan 20 02:26:34.328000 audit[6231]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffff7b7b800 a2=94 a3=88 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:34.328000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:34.328000 audit: BPF prog-id=258 op=LOAD Jan 20 02:26:34.328000 audit[6231]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffff7b7b680 a2=94 a3=2 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:34.328000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:34.328000 audit: BPF prog-id=258 op=UNLOAD Jan 20 02:26:34.328000 audit[6231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffff7b7b6b0 a2=0 a3=7ffff7b7b7b0 items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:34.328000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:34.334000 audit: BPF prog-id=257 op=UNLOAD Jan 20 02:26:34.334000 audit[6231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=39825d10 a2=0 a3=1d48a362f5c714e items=0 ppid=5466 pid=6231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:34.334000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 02:26:34.499000 audit: BPF prog-id=244 op=UNLOAD Jan 20 02:26:34.499000 audit[5466]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c001032100 a2=0 a3=0 items=0 ppid=5458 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:34.499000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 20 02:26:34.699998 kubelet[3053]: E0120 02:26:34.698223 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:26:35.185000 audit[6268]: NETFILTER_CFG table=filter:129 family=2 entries=14 op=nft_register_rule pid=6268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:26:35.185000 audit[6268]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff78d3d2f0 a2=0 a3=7fff78d3d2dc items=0 ppid=3166 pid=6268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:35.185000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:26:35.226000 audit[6268]: NETFILTER_CFG 
table=nat:130 family=2 entries=20 op=nft_register_rule pid=6268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:26:35.226000 audit[6268]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff78d3d2f0 a2=0 a3=7fff78d3d2dc items=0 ppid=3166 pid=6268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:35.226000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:26:35.749574 kubelet[3053]: E0120 02:26:35.747802 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:26:36.492333 kubelet[3053]: E0120 02:26:36.491705 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:36.529000 audit[6294]: NETFILTER_CFG table=nat:131 family=2 entries=15 op=nft_register_chain pid=6294 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 02:26:36.529000 audit[6294]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd8f60a6e0 a2=0 a3=7ffd8f60a6cc items=0 ppid=5466 pid=6294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:36.529000 
audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 02:26:36.715000 audit[6293]: NETFILTER_CFG table=mangle:132 family=2 entries=16 op=nft_register_chain pid=6293 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 02:26:36.715000 audit[6293]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff1baf4660 a2=0 a3=7fff1baf464c items=0 ppid=5466 pid=6293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:36.715000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 02:26:37.014000 audit[6292]: NETFILTER_CFG table=raw:133 family=2 entries=21 op=nft_register_chain pid=6292 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 02:26:37.014000 audit[6292]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffdfed9b2a0 a2=0 a3=7ffdfed9b28c items=0 ppid=5466 pid=6292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:37.014000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 02:26:37.048000 audit[6301]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=6301 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:26:37.048000 audit[6301]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcb41750f0 a2=0 a3=7ffcb41750dc items=0 ppid=3166 pid=6301 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:37.048000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:26:37.186000 audit[6301]: NETFILTER_CFG table=nat:135 family=2 entries=56 op=nft_register_chain pid=6301 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:26:37.186000 audit[6301]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffcb41750f0 a2=0 a3=7ffcb41750dc items=0 ppid=3166 pid=6301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:37.186000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:26:37.216000 audit[6296]: NETFILTER_CFG table=filter:136 family=2 entries=292 op=nft_register_chain pid=6296 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 02:26:37.216000 audit[6296]: SYSCALL arch=c000003e syscall=46 success=yes exit=171632 a0=3 a1=7ffcc8d98c00 a2=0 a3=561cd0a58000 items=0 ppid=5466 pid=6296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:37.216000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 02:26:37.527801 containerd[1640]: time="2026-01-20T02:26:37.525361584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:26:37.582000 audit[6308]: NETFILTER_CFG table=filter:137 family=2 entries=59 op=nft_register_chain pid=6308 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 02:26:37.582000 audit[6308]: SYSCALL arch=c000003e syscall=46 success=yes exit=29476 a0=3 a1=7ffc60f3dcd0 a2=0 a3=7ffc60f3dcbc items=0 ppid=5466 pid=6308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:26:37.582000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 02:26:37.801044 containerd[1640]: time="2026-01-20T02:26:37.799394942Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:37.830642 containerd[1640]: time="2026-01-20T02:26:37.823455001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:26:37.830642 containerd[1640]: time="2026-01-20T02:26:37.823602026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:37.835681 kubelet[3053]: E0120 02:26:37.835427 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:26:37.848258 kubelet[3053]: E0120 02:26:37.835616 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:26:37.849614 kubelet[3053]: E0120 02:26:37.849578 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:37.861454 kubelet[3053]: E0120 02:26:37.861382 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:26:38.569397 containerd[1640]: time="2026-01-20T02:26:38.568436046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:26:38.740701 kubelet[3053]: E0120 02:26:38.740104 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:38.764024 containerd[1640]: time="2026-01-20T02:26:38.763788547Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:38.776217 containerd[1640]: time="2026-01-20T02:26:38.776048099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:26:38.776217 containerd[1640]: time="2026-01-20T02:26:38.776174486Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:38.780904 kubelet[3053]: E0120 02:26:38.778258 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:26:38.780904 kubelet[3053]: E0120 02:26:38.778330 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:26:38.780904 kubelet[3053]: E0120 02:26:38.778429 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-749b857967-xt4pg_calico-system(75bc6f23-38ce-4e96-aaf1-83d653850866): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:38.781064 containerd[1640]: time="2026-01-20T02:26:38.780116661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:26:38.952680 containerd[1640]: time="2026-01-20T02:26:38.941143252Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:38.972105 containerd[1640]: time="2026-01-20T02:26:38.960875383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:26:38.972105 containerd[1640]: time="2026-01-20T02:26:38.961240747Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:38.972497 kubelet[3053]: E0120 02:26:38.972442 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:26:38.973213 kubelet[3053]: E0120 02:26:38.973181 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:26:38.973394 kubelet[3053]: E0120 02:26:38.973365 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-749b857967-xt4pg_calico-system(75bc6f23-38ce-4e96-aaf1-83d653850866): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:38.973613 kubelet[3053]: E0120 02:26:38.973565 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:26:43.524671 containerd[1640]: time="2026-01-20T02:26:43.517353190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:26:43.719577 containerd[1640]: time="2026-01-20T02:26:43.719234100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:43.760839 containerd[1640]: time="2026-01-20T02:26:43.760623559Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:26:43.762045 containerd[1640]: time="2026-01-20T02:26:43.761310754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:43.771149 kubelet[3053]: E0120 02:26:43.763141 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:26:43.771149 kubelet[3053]: E0120 02:26:43.763270 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:26:43.771149 kubelet[3053]: E0120 02:26:43.763582 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:43.792281 containerd[1640]: time="2026-01-20T02:26:43.791677412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:26:43.958767 containerd[1640]: time="2026-01-20T02:26:43.958711566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:43.968434 containerd[1640]: time="2026-01-20T02:26:43.968311367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:26:43.968434 containerd[1640]: time="2026-01-20T02:26:43.968398391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:43.978167 kubelet[3053]: E0120 02:26:43.977333 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:26:43.978167 kubelet[3053]: E0120 02:26:43.977395 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:26:43.978167 kubelet[3053]: E0120 02:26:43.977497 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod 
csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:43.978167 kubelet[3053]: E0120 02:26:43.977606 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:26:44.531724 containerd[1640]: time="2026-01-20T02:26:44.523597154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:26:44.955382 containerd[1640]: time="2026-01-20T02:26:44.952212285Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:44.982744 containerd[1640]: time="2026-01-20T02:26:44.982669338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:26:44.988917 containerd[1640]: time="2026-01-20T02:26:44.984107908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:44.989016 kubelet[3053]: E0120 
02:26:44.984638 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:26:44.989016 kubelet[3053]: E0120 02:26:44.984692 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:26:44.989016 kubelet[3053]: E0120 02:26:44.984779 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:44.989016 kubelet[3053]: E0120 02:26:44.984849 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:26:45.596952 containerd[1640]: time="2026-01-20T02:26:45.590313267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:26:45.752690 containerd[1640]: time="2026-01-20T02:26:45.752497489Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:45.762068 containerd[1640]: 
time="2026-01-20T02:26:45.762005146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:45.762446 containerd[1640]: time="2026-01-20T02:26:45.762254784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:26:45.764991 kubelet[3053]: E0120 02:26:45.763434 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:26:45.764991 kubelet[3053]: E0120 02:26:45.763509 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:26:45.764991 kubelet[3053]: E0120 02:26:45.763678 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:45.764991 kubelet[3053]: E0120 02:26:45.763726 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:26:49.608363 containerd[1640]: time="2026-01-20T02:26:49.606380403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:26:49.778930 containerd[1640]: time="2026-01-20T02:26:49.778654693Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:26:49.817012 containerd[1640]: time="2026-01-20T02:26:49.814971687Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:26:49.817012 containerd[1640]: time="2026-01-20T02:26:49.815118641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:26:49.832059 kubelet[3053]: E0120 02:26:49.830742 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:26:49.832059 kubelet[3053]: E0120 02:26:49.831088 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:26:49.905391 kubelet[3053]: E0120 02:26:49.841262 3053 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-apiserver start failed in pod calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:26:49.905391 kubelet[3053]: E0120 02:26:49.841357 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:26:51.526617 kubelet[3053]: E0120 02:26:51.526501 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:26:53.819020 kubelet[3053]: E0120 02:26:53.818604 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:26:56.548216 kubelet[3053]: E0120 02:26:56.546148 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:26:56.767015 containerd[1640]: time="2026-01-20T02:26:56.766896107Z" level=info msg="container event discarded" container=48e6996f311792dbae4460a025637930876b70abdb08b489137e8fe67cd587bb type=CONTAINER_CREATED_EVENT Jan 20 02:26:56.784886 containerd[1640]: time="2026-01-20T02:26:56.784824593Z" level=info msg="container event discarded" container=48e6996f311792dbae4460a025637930876b70abdb08b489137e8fe67cd587bb type=CONTAINER_STARTED_EVENT Jan 20 02:26:57.122843 containerd[1640]: time="2026-01-20T02:26:57.115213494Z" level=info msg="container event discarded" container=fc529632033220a9cf38c29ed8e5683e903670fd9b672ab2b2f2bc5e17c7536a type=CONTAINER_CREATED_EVENT Jan 20 02:26:57.604859 kubelet[3053]: E0120 02:26:57.559753 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:26:58.815211 containerd[1640]: time="2026-01-20T02:26:58.815098206Z" level=info msg="container event discarded" container=fc529632033220a9cf38c29ed8e5683e903670fd9b672ab2b2f2bc5e17c7536a type=CONTAINER_STARTED_EVENT Jan 20 02:26:59.591047 kubelet[3053]: E0120 02:26:59.574384 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:26:59.641770 kubelet[3053]: E0120 02:26:59.636691 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:27:01.536250 kubelet[3053]: E0120 02:27:01.519642 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:27:02.721246 containerd[1640]: time="2026-01-20T02:27:02.719141047Z" level=info msg="container event discarded" container=549fd0f2435fb263abd2e3fa0d85a4942fe97bc2d30bd31403454b283038b54c type=CONTAINER_CREATED_EVENT Jan 20 02:27:02.721246 containerd[1640]: time="2026-01-20T02:27:02.719207050Z" level=info msg="container event discarded" container=549fd0f2435fb263abd2e3fa0d85a4942fe97bc2d30bd31403454b283038b54c type=CONTAINER_STARTED_EVENT Jan 20 02:27:04.728277 containerd[1640]: time="2026-01-20T02:27:04.717957839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:27:04.966932 containerd[1640]: time="2026-01-20T02:27:04.966869316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:27:05.063858 containerd[1640]: time="2026-01-20T02:27:05.063627559Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:27:05.063858 containerd[1640]: time="2026-01-20T02:27:05.063779373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:27:05.066647 kubelet[3053]: E0120 02:27:05.064388 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:27:05.066647 kubelet[3053]: E0120 02:27:05.064599 3053 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:27:05.066647 kubelet[3053]: E0120 02:27:05.064872 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:27:05.083716 kubelet[3053]: E0120 02:27:05.077623 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:27:07.526599 containerd[1640]: time="2026-01-20T02:27:07.525806300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:27:07.657514 containerd[1640]: time="2026-01-20T02:27:07.657418801Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:27:07.687992 containerd[1640]: time="2026-01-20T02:27:07.687910795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:27:07.688338 containerd[1640]: time="2026-01-20T02:27:07.688289853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:27:07.688800 kubelet[3053]: E0120 02:27:07.688748 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:27:07.690583 kubelet[3053]: E0120 02:27:07.689434 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:27:07.690583 kubelet[3053]: E0120 02:27:07.689638 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:27:07.690583 kubelet[3053]: E0120 02:27:07.689695 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:27:08.582281 containerd[1640]: time="2026-01-20T02:27:08.570764192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:27:08.893857 containerd[1640]: time="2026-01-20T02:27:08.893301508Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:27:08.915838 containerd[1640]: time="2026-01-20T02:27:08.914738432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:27:08.915838 containerd[1640]: time="2026-01-20T02:27:08.914879305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:27:08.916387 kubelet[3053]: E0120 02:27:08.916334 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:27:08.923984 kubelet[3053]: E0120 02:27:08.923916 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:27:08.939098 kubelet[3053]: E0120 02:27:08.932694 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-749b857967-xt4pg_calico-system(75bc6f23-38ce-4e96-aaf1-83d653850866): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:27:08.955863 containerd[1640]: time="2026-01-20T02:27:08.955446890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:27:09.124153 containerd[1640]: 
time="2026-01-20T02:27:09.123995976Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:27:09.193653 containerd[1640]: time="2026-01-20T02:27:09.152558562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:27:09.194158 containerd[1640]: time="2026-01-20T02:27:09.194111959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:27:09.205264 kubelet[3053]: E0120 02:27:09.196818 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:27:09.205264 kubelet[3053]: E0120 02:27:09.196908 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:27:09.205264 kubelet[3053]: E0120 02:27:09.197408 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-749b857967-xt4pg_calico-system(75bc6f23-38ce-4e96-aaf1-83d653850866): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:27:09.205264 kubelet[3053]: E0120 02:27:09.197583 3053 pod_workers.go:1324] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:27:09.522904 containerd[1640]: time="2026-01-20T02:27:09.519571999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:27:09.817858 containerd[1640]: time="2026-01-20T02:27:09.814787687Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:27:09.897391 containerd[1640]: time="2026-01-20T02:27:09.875310069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:27:09.897770 containerd[1640]: time="2026-01-20T02:27:09.879648022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:27:09.907512 kubelet[3053]: E0120 02:27:09.903053 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:27:09.907512 kubelet[3053]: E0120 02:27:09.903105 3053 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:27:09.907512 kubelet[3053]: E0120 02:27:09.903212 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:27:09.907512 kubelet[3053]: E0120 02:27:09.903251 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:27:14.578258 kubelet[3053]: E0120 02:27:14.546741 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:27:14.755672 containerd[1640]: time="2026-01-20T02:27:14.748637047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:27:14.920852 containerd[1640]: time="2026-01-20T02:27:14.920570544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:27:14.965433 containerd[1640]: time="2026-01-20T02:27:14.962203293Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:27:14.965433 containerd[1640]: time="2026-01-20T02:27:14.962261201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:27:15.021175 kubelet[3053]: E0120 02:27:14.996849 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:27:15.021175 kubelet[3053]: E0120 02:27:15.012943 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:27:15.021175 kubelet[3053]: E0120 02:27:15.013052 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:27:15.021468 containerd[1640]: time="2026-01-20T02:27:15.018629865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:27:15.279279 containerd[1640]: time="2026-01-20T02:27:15.265187595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:27:15.309591 containerd[1640]: time="2026-01-20T02:27:15.309455803Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:27:15.309857 containerd[1640]: time="2026-01-20T02:27:15.309687816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:27:15.311771 kubelet[3053]: E0120 02:27:15.311633 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:27:15.311975 kubelet[3053]: E0120 02:27:15.311910 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:27:15.328141 kubelet[3053]: E0120 02:27:15.327754 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:27:15.328141 kubelet[3053]: E0120 02:27:15.327834 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:27:15.519027 containerd[1640]: time="2026-01-20T02:27:15.518926052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:27:15.707842 containerd[1640]: time="2026-01-20T02:27:15.707619629Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:27:15.728353 containerd[1640]: time="2026-01-20T02:27:15.728270447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:27:15.728893 containerd[1640]: time="2026-01-20T02:27:15.728837737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:27:15.732580 kubelet[3053]: E0120 02:27:15.731621 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:27:15.732580 kubelet[3053]: E0120 02:27:15.731697 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:27:15.732580 kubelet[3053]: E0120 02:27:15.731791 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:27:15.732580 kubelet[3053]: E0120 02:27:15.731833 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:27:16.715686 kubelet[3053]: E0120 02:27:16.695434 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:27:17.523253 kubelet[3053]: E0120 02:27:17.522714 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:27:18.604228 containerd[1640]: time="2026-01-20T02:27:18.599930377Z" level=info msg="container event 
discarded" container=3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08 type=CONTAINER_CREATED_EVENT Jan 20 02:27:19.272999 containerd[1640]: time="2026-01-20T02:27:19.268846511Z" level=info msg="container event discarded" container=3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08 type=CONTAINER_STARTED_EVENT Jan 20 02:27:20.569203 kubelet[3053]: E0120 02:27:20.566818 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:27:21.523599 kubelet[3053]: E0120 02:27:21.514697 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:27:23.611851 kubelet[3053]: E0120 02:27:23.611755 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:27:24.614382 kubelet[3053]: E0120 02:27:24.613880 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:27:26.588280 kubelet[3053]: E0120 02:27:26.588113 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:27:27.570246 kubelet[3053]: E0120 02:27:27.564884 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:27:27.679449 kubelet[3053]: E0120 02:27:27.673951 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:27:31.558606 kubelet[3053]: E0120 02:27:31.558323 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:27:34.690075 kubelet[3053]: E0120 02:27:34.685752 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:27:36.273725 containerd[1640]: time="2026-01-20T02:27:36.268495606Z" level=info msg="container event discarded" container=3adf36a88bcc3c4136c88130a6044180903b5eabb33494c194d2feb3b70eca08 type=CONTAINER_STOPPED_EVENT Jan 20 02:27:36.533444 kubelet[3053]: E0120 02:27:36.533382 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:27:38.382800 containerd[1640]: time="2026-01-20T02:27:38.373615366Z" level=info msg="container event discarded" container=b7db0b09fbfeac738b12c9e05aa963119bafa43da4041ccb79e8c79af99479f9 type=CONTAINER_CREATED_EVENT Jan 20 02:27:38.534396 kubelet[3053]: E0120 02:27:38.534171 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:27:39.535608 kubelet[3053]: E0120 02:27:39.523393 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:27:39.536353 containerd[1640]: time="2026-01-20T02:27:39.524469996Z" level=info msg="container event discarded" container=b7db0b09fbfeac738b12c9e05aa963119bafa43da4041ccb79e8c79af99479f9 type=CONTAINER_STARTED_EVENT Jan 20 02:27:41.590110 kubelet[3053]: E0120 02:27:41.589272 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:27:41.605759 kubelet[3053]: E0120 02:27:41.602127 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:27:42.624502 kubelet[3053]: E0120 02:27:42.623849 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:27:48.610573 kubelet[3053]: E0120 02:27:48.595029 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:27:49.533710 kubelet[3053]: E0120 02:27:49.533628 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:27:53.558591 containerd[1640]: time="2026-01-20T02:27:53.547277016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:27:53.577270 kubelet[3053]: E0120 02:27:53.567859 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:27:53.752564 containerd[1640]: time="2026-01-20T02:27:53.749812972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:27:53.755638 containerd[1640]: time="2026-01-20T02:27:53.755586007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:27:53.756576 containerd[1640]: time="2026-01-20T02:27:53.755920981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:27:53.756933 kubelet[3053]: E0120 02:27:53.756840 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:27:53.757124 kubelet[3053]: E0120 02:27:53.757085 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:27:53.757373 kubelet[3053]: E0120 02:27:53.757278 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:27:53.775786 kubelet[3053]: E0120 02:27:53.775711 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:27:54.578614 kubelet[3053]: E0120 02:27:54.578055 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:27:55.525946 kubelet[3053]: E0120 02:27:55.521786 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:27:57.574103 containerd[1640]: time="2026-01-20T02:27:57.568273916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:27:57.725872 containerd[1640]: time="2026-01-20T02:27:57.722782265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:27:57.728995 containerd[1640]: time="2026-01-20T02:27:57.728930568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:27:57.729237 containerd[1640]: time="2026-01-20T02:27:57.729212453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:27:57.729681 kubelet[3053]: E0120 02:27:57.729631 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:27:57.730273 kubelet[3053]: E0120 02:27:57.730247 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:27:57.736466 kubelet[3053]: E0120 02:27:57.736405 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:27:57.737258 kubelet[3053]: E0120 02:27:57.737222 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:27:58.533668 kubelet[3053]: E0120 02:27:58.528640 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:28:00.522106 kubelet[3053]: E0120 02:28:00.521391 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:28:01.541124 kubelet[3053]: E0120 02:28:01.528705 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:28:02.540151 containerd[1640]: time="2026-01-20T02:28:02.538945193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:28:02.762791 containerd[1640]: 
time="2026-01-20T02:28:02.762025662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:28:02.794946 containerd[1640]: time="2026-01-20T02:28:02.794683222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:28:02.899705 containerd[1640]: time="2026-01-20T02:28:02.897913017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:28:02.910506 kubelet[3053]: E0120 02:28:02.898289 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:28:02.910506 kubelet[3053]: E0120 02:28:02.909607 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:28:02.910506 kubelet[3053]: E0120 02:28:02.909965 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:28:02.910506 kubelet[3053]: E0120 
02:28:02.910015 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:28:02.925375 containerd[1640]: time="2026-01-20T02:28:02.924420058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:28:03.066241 containerd[1640]: time="2026-01-20T02:28:03.059752055Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:28:03.091917 containerd[1640]: time="2026-01-20T02:28:03.091797231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:28:03.095879 containerd[1640]: time="2026-01-20T02:28:03.092266918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:28:03.097487 kubelet[3053]: E0120 02:28:03.097434 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:28:03.097742 kubelet[3053]: E0120 02:28:03.097713 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:28:03.104866 kubelet[3053]: E0120 02:28:03.104812 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-749b857967-xt4pg_calico-system(75bc6f23-38ce-4e96-aaf1-83d653850866): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:28:03.147429 containerd[1640]: time="2026-01-20T02:28:03.147375380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:28:03.367127 containerd[1640]: time="2026-01-20T02:28:03.364669442Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:28:03.389269 containerd[1640]: time="2026-01-20T02:28:03.388837870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:28:03.392258 containerd[1640]: time="2026-01-20T02:28:03.392057592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:28:03.393163 kubelet[3053]: E0120 02:28:03.392963 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:28:03.395201 kubelet[3053]: E0120 02:28:03.393847 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:28:03.397412 kubelet[3053]: E0120 02:28:03.395721 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-749b857967-xt4pg_calico-system(75bc6f23-38ce-4e96-aaf1-83d653850866): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:28:03.397412 kubelet[3053]: E0120 02:28:03.395801 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:28:05.529623 containerd[1640]: time="2026-01-20T02:28:05.529571635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:28:05.761197 containerd[1640]: time="2026-01-20T02:28:05.760121129Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:28:05.805587 containerd[1640]: time="2026-01-20T02:28:05.802661491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:28:05.805587 containerd[1640]: time="2026-01-20T02:28:05.802804848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:28:05.805828 kubelet[3053]: E0120 02:28:05.803143 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:28:05.805828 kubelet[3053]: E0120 02:28:05.803260 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:28:05.805828 kubelet[3053]: E0120 02:28:05.803505 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:28:05.805828 kubelet[3053]: E0120 02:28:05.803605 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:28:07.632683 kubelet[3053]: E0120 02:28:07.632014 3053 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:28:09.541059 containerd[1640]: time="2026-01-20T02:28:09.540936371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:28:09.569829 kubelet[3053]: E0120 02:28:09.565648 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:28:09.785071 containerd[1640]: time="2026-01-20T02:28:09.785002369Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:28:09.817362 containerd[1640]: time="2026-01-20T02:28:09.817121900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:28:09.817718 containerd[1640]: time="2026-01-20T02:28:09.817647821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:28:09.822170 kubelet[3053]: E0120 02:28:09.822103 3053 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:28:09.833298 kubelet[3053]: E0120 02:28:09.823965 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:28:09.833298 kubelet[3053]: E0120 02:28:09.828249 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:28:09.834018 containerd[1640]: time="2026-01-20T02:28:09.830956091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:28:10.014194 containerd[1640]: time="2026-01-20T02:28:10.013908139Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:28:10.041661 containerd[1640]: time="2026-01-20T02:28:10.041484745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:28:10.058304 kubelet[3053]: E0120 02:28:10.042874 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:28:10.058304 kubelet[3053]: E0120 02:28:10.042958 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:28:10.058304 kubelet[3053]: E0120 02:28:10.044910 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:28:10.058304 kubelet[3053]: E0120 02:28:10.044979 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:28:10.069860 containerd[1640]: time="2026-01-20T02:28:10.041951846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 
20 02:28:14.609193 kubelet[3053]: E0120 02:28:14.608724 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:28:17.527430 kubelet[3053]: E0120 02:28:17.526791 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:28:18.525817 kubelet[3053]: E0120 02:28:18.524214 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:28:20.555415 kubelet[3053]: E0120 02:28:20.554917 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:28:20.565385 kubelet[3053]: E0120 02:28:20.565317 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:28:23.578304 kubelet[3053]: E0120 02:28:23.578224 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:28:29.542738 kubelet[3053]: E0120 02:28:29.541823 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:28:29.570832 kubelet[3053]: E0120 02:28:29.542515 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:28:33.520800 kubelet[3053]: E0120 02:28:33.518235 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:28:33.520800 kubelet[3053]: E0120 02:28:33.520443 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:28:34.540146 kubelet[3053]: E0120 02:28:34.527894 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" 
Jan 20 02:28:35.520408 kubelet[3053]: E0120 02:28:35.517704 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:28:36.516487 kubelet[3053]: E0120 02:28:36.516073 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:28:39.529322 kubelet[3053]: E0120 02:28:39.528185 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:28:39.529322 kubelet[3053]: E0120 02:28:39.529185 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:28:41.755589 containerd[1640]: time="2026-01-20T02:28:41.755122995Z" level=info msg="container event discarded" container=ae84de9c51e820944ea345533a1a31aa6ee2df1028674b301d17a8b74075f781 type=CONTAINER_CREATED_EVENT Jan 20 02:28:41.755589 containerd[1640]: time="2026-01-20T02:28:41.755221178Z" level=info msg="container event discarded" container=ae84de9c51e820944ea345533a1a31aa6ee2df1028674b301d17a8b74075f781 type=CONTAINER_STARTED_EVENT Jan 20 02:28:41.778746 containerd[1640]: time="2026-01-20T02:28:41.778354218Z" level=info msg="container event discarded" container=321f3da913be746f14abc12b7a74b61c5908daffe0894641fb84ffc2a75fd113 type=CONTAINER_CREATED_EVENT Jan 20 02:28:41.778746 containerd[1640]: 
time="2026-01-20T02:28:41.778419438Z" level=info msg="container event discarded" container=321f3da913be746f14abc12b7a74b61c5908daffe0894641fb84ffc2a75fd113 type=CONTAINER_STARTED_EVENT Jan 20 02:28:42.568077 kubelet[3053]: E0120 02:28:42.567819 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:28:42.574871 kubelet[3053]: E0120 02:28:42.574495 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:28:45.529911 containerd[1640]: time="2026-01-20T02:28:45.529796893Z" level=info msg="container event discarded" container=29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e type=CONTAINER_CREATED_EVENT Jan 20 02:28:46.542628 kubelet[3053]: E0120 02:28:46.539711 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:28:46.563166 kubelet[3053]: E0120 02:28:46.562956 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:28:49.535429 kubelet[3053]: E0120 02:28:49.528029 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:28:49.581194 kubelet[3053]: E0120 02:28:49.580985 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:28:53.514597 kubelet[3053]: E0120 02:28:53.514458 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:28:54.567996 kubelet[3053]: E0120 02:28:54.558853 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:28:56.533294 kubelet[3053]: E0120 02:28:56.522499 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:28:57.590197 kubelet[3053]: E0120 02:28:57.584427 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:28:57.595077 kubelet[3053]: E0120 02:28:57.595025 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:28:58.523786 kubelet[3053]: E0120 02:28:58.523610 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:28:59.900281 containerd[1640]: 
time="2026-01-20T02:28:59.894966605Z" level=info msg="container event discarded" container=29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e type=CONTAINER_STARTED_EVENT Jan 20 02:29:00.540824 kubelet[3053]: E0120 02:29:00.533179 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:29:01.573274 kubelet[3053]: E0120 02:29:01.572279 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:29:02.711398 containerd[1640]: time="2026-01-20T02:29:02.711202710Z" level=info msg="container event discarded" container=29dcb58a629eb10b45db8cab662d44d5b45d7820af2670b4985a8923349b695e type=CONTAINER_STOPPED_EVENT Jan 20 02:29:07.527009 kubelet[3053]: E0120 02:29:07.526900 3053 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:29:09.516488 kubelet[3053]: E0120 02:29:09.516421 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:29:10.556260 kubelet[3053]: E0120 02:29:10.550928 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:29:10.802911 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:44800.service - OpenSSH per-connection server daemon (10.0.0.1:44800). Jan 20 02:29:10.826003 kernel: kauditd_printk_skb: 78 callbacks suppressed Jan 20 02:29:10.855170 kernel: audit: type=1130 audit(1768876150.807:755): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.97:22-10.0.0.1:44800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:10.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.97:22-10.0.0.1:44800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:11.922160 kernel: audit: type=1101 audit(1768876151.900:756): pid=6548 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:11.900000 audit[6548]: USER_ACCT pid=6548 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:11.922589 sshd[6548]: Accepted publickey for core from 10.0.0.1 port 44800 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:29:11.910000 audit[6548]: CRED_ACQ pid=6548 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:11.927394 sshd-session[6548]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:11.967054 kernel: audit: type=1103 audit(1768876151.910:757): pid=6548 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:11.967251 kernel: audit: type=1006 audit(1768876151.910:758): pid=6548 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 20 02:29:11.910000 audit[6548]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5de7cc90 a2=3 a3=0 items=0 ppid=1 pid=6548 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:29:12.029769 kernel: audit: type=1300 audit(1768876151.910:758): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5de7cc90 a2=3 a3=0 items=0 ppid=1 pid=6548 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:29:12.004648 systemd-logind[1619]: New session 11 of user core. Jan 20 02:29:11.910000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:29:12.058693 kernel: audit: type=1327 audit(1768876151.910:758): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:29:12.034247 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 02:29:12.059000 audit[6548]: USER_START pid=6548 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:12.082000 audit[6555]: CRED_ACQ pid=6555 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:12.129514 kernel: audit: type=1105 audit(1768876152.059:759): pid=6548 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:12.129899 kernel: audit: type=1103 audit(1768876152.082:760): pid=6555 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:12.550804 kubelet[3053]: E0120 02:29:12.535429 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:29:12.738596 containerd[1640]: time="2026-01-20T02:29:12.738464277Z" 
level=info msg="container event discarded" container=10aded405619ff2d675637018942053803fa355cbb83079b53dbf7b6f6dfc005 type=CONTAINER_CREATED_EVENT Jan 20 02:29:13.159716 sshd[6555]: Connection closed by 10.0.0.1 port 44800 Jan 20 02:29:13.165826 sshd-session[6548]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:13.191000 audit[6548]: USER_END pid=6548 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:13.231793 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:44800.service: Deactivated successfully. Jan 20 02:29:13.191000 audit[6548]: CRED_DISP pid=6548 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:13.266268 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 02:29:13.288711 kernel: audit: type=1106 audit(1768876153.191:761): pid=6548 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:13.288946 kernel: audit: type=1104 audit(1768876153.191:762): pid=6548 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:13.282212 systemd-logind[1619]: Session 11 logged out. Waiting for processes to exit. 
Jan 20 02:29:13.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.97:22-10.0.0.1:44800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:13.301294 systemd-logind[1619]: Removed session 11. Jan 20 02:29:13.529632 kubelet[3053]: E0120 02:29:13.524083 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:29:14.560581 kubelet[3053]: E0120 02:29:14.558930 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:29:15.584477 kubelet[3053]: E0120 02:29:15.584347 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:29:18.491869 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:29:18.492098 kernel: audit: type=1130 audit(1768876158.409:764): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.97:22-10.0.0.1:60548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:18.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.97:22-10.0.0.1:60548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:18.492281 containerd[1640]: time="2026-01-20T02:29:18.409205373Z" level=info msg="container event discarded" container=10aded405619ff2d675637018942053803fa355cbb83079b53dbf7b6f6dfc005 type=CONTAINER_STARTED_EVENT Jan 20 02:29:18.420794 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:60548.service - OpenSSH per-connection server daemon (10.0.0.1:60548). 
Jan 20 02:29:18.667992 kubelet[3053]: E0120 02:29:18.665843 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:29:19.090000 audit[6586]: USER_ACCT pid=6586 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:19.098829 sshd[6586]: Accepted publickey for core from 10.0.0.1 port 60548 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:29:19.105785 sshd-session[6586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:19.138292 systemd-logind[1619]: New session 12 of user core. 
Jan 20 02:29:19.166139 kernel: audit: type=1101 audit(1768876159.090:765): pid=6586 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:19.166234 kernel: audit: type=1103 audit(1768876159.092:766): pid=6586 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:19.092000 audit[6586]: CRED_ACQ pid=6586 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:19.199714 kernel: audit: type=1006 audit(1768876159.092:767): pid=6586 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 20 02:29:19.199899 kernel: audit: type=1300 audit(1768876159.092:767): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd14c52170 a2=3 a3=0 items=0 ppid=1 pid=6586 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:29:19.092000 audit[6586]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd14c52170 a2=3 a3=0 items=0 ppid=1 pid=6586 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:29:19.255973 kernel: audit: type=1327 audit(1768876159.092:767): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:29:19.092000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:29:19.256693 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 02:29:19.295000 audit[6586]: USER_START pid=6586 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:19.350674 kernel: audit: type=1105 audit(1768876159.295:768): pid=6586 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:19.363000 audit[6590]: CRED_ACQ pid=6590 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:19.418078 kernel: audit: type=1103 audit(1768876159.363:769): pid=6590 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:19.867148 sshd[6590]: Connection closed by 10.0.0.1 port 60548 Jan 20 02:29:19.871197 sshd-session[6586]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:19.879000 audit[6586]: USER_END pid=6586 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 20 02:29:19.879000 audit[6586]: CRED_DISP pid=6586 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:19.928090 systemd-logind[1619]: Session 12 logged out. Waiting for processes to exit. Jan 20 02:29:19.929664 kernel: audit: type=1106 audit(1768876159.879:770): pid=6586 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:19.929753 kernel: audit: type=1104 audit(1768876159.879:771): pid=6586 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:19.930034 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:60548.service: Deactivated successfully. Jan 20 02:29:19.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.97:22-10.0.0.1:60548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:19.958183 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 02:29:19.962712 systemd-logind[1619]: Removed session 12. 
Jan 20 02:29:20.518559 kubelet[3053]: E0120 02:29:20.518350 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:29:20.538183 kubelet[3053]: E0120 02:29:20.531625 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:29:24.564067 containerd[1640]: time="2026-01-20T02:29:24.554122586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:29:24.725577 containerd[1640]: time="2026-01-20T02:29:24.722638998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:29:24.753418 containerd[1640]: time="2026-01-20T02:29:24.751734548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:29:24.753418 containerd[1640]: time="2026-01-20T02:29:24.751933438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:29:24.764783 kubelet[3053]: E0120 02:29:24.761422 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:29:24.764783 kubelet[3053]: E0120 02:29:24.761982 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:29:24.764783 kubelet[3053]: E0120 02:29:24.762478 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:29:24.764783 kubelet[3053]: E0120 02:29:24.762835 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:29:24.768223 containerd[1640]: time="2026-01-20T02:29:24.767804999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:29:24.919164 containerd[1640]: time="2026-01-20T02:29:24.918173223Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:29:24.940842 containerd[1640]: time="2026-01-20T02:29:24.940649236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:29:24.940842 containerd[1640]: time="2026-01-20T02:29:24.940785961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:29:24.950561 kubelet[3053]: E0120 02:29:24.946300 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:29:24.950561 kubelet[3053]: E0120 02:29:24.946372 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:29:24.950561 kubelet[3053]: E0120 02:29:24.946506 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-749b857967-xt4pg_calico-system(75bc6f23-38ce-4e96-aaf1-83d653850866): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:29:24.957606 containerd[1640]: time="2026-01-20T02:29:24.957366423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:29:24.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.97:22-10.0.0.1:55586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:24.984870 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:29:24.985219 kernel: audit: type=1130 audit(1768876164.961:773): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.97:22-10.0.0.1:55586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:24.964209 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:55586.service - OpenSSH per-connection server daemon (10.0.0.1:55586). 
Jan 20 02:29:25.161579 containerd[1640]: time="2026-01-20T02:29:25.159335982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:29:25.193953 containerd[1640]: time="2026-01-20T02:29:25.191827788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:29:25.193953 containerd[1640]: time="2026-01-20T02:29:25.191930199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:29:25.197064 kubelet[3053]: E0120 02:29:25.195092 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:29:25.197064 kubelet[3053]: E0120 02:29:25.195177 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:29:25.206589 kubelet[3053]: E0120 02:29:25.200066 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-749b857967-xt4pg_calico-system(75bc6f23-38ce-4e96-aaf1-83d653850866): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:29:25.206589 kubelet[3053]: E0120 
02:29:25.202717 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:29:25.306000 audit[6605]: USER_ACCT pid=6605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:25.337057 sshd[6605]: Accepted publickey for core from 10.0.0.1 port 55586 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:29:25.337681 kernel: audit: type=1101 audit(1768876165.306:774): pid=6605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:25.321000 audit[6605]: CRED_ACQ pid=6605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:25.339322 sshd-session[6605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:25.398508 kernel: audit: type=1103 audit(1768876165.321:775): pid=6605 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:25.414774 systemd-logind[1619]: New session 13 of user core. Jan 20 02:29:25.321000 audit[6605]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdc145250 a2=3 a3=0 items=0 ppid=1 pid=6605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:29:25.480733 kernel: audit: type=1006 audit(1768876165.321:776): pid=6605 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 20 02:29:25.480891 kernel: audit: type=1300 audit(1768876165.321:776): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdc145250 a2=3 a3=0 items=0 ppid=1 pid=6605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:29:25.321000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:29:25.510769 kernel: audit: type=1327 audit(1768876165.321:776): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:29:25.519258 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 02:29:25.551811 containerd[1640]: time="2026-01-20T02:29:25.551507371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:29:25.618000 audit[6605]: USER_START pid=6605 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:25.695656 kernel: audit: type=1105 audit(1768876165.618:777): pid=6605 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:25.695822 kernel: audit: type=1103 audit(1768876165.654:778): pid=6609 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:25.654000 audit[6609]: CRED_ACQ pid=6609 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:25.748714 containerd[1640]: time="2026-01-20T02:29:25.746224311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:29:25.770834 containerd[1640]: time="2026-01-20T02:29:25.764341608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:29:25.770834 containerd[1640]: time="2026-01-20T02:29:25.765058081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:29:25.772207 kubelet[3053]: E0120 02:29:25.771831 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:29:25.778204 kubelet[3053]: E0120 02:29:25.772099 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:29:25.790092 kubelet[3053]: E0120 02:29:25.778398 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:29:25.790092 kubelet[3053]: E0120 02:29:25.784362 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" 
podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:29:26.631000 audit[6605]: USER_END pid=6605 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:26.640503 sshd[6609]: Connection closed by 10.0.0.1 port 55586 Jan 20 02:29:26.633848 sshd-session[6605]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:26.645667 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:55586.service: Deactivated successfully. Jan 20 02:29:26.663405 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 02:29:26.631000 audit[6605]: CRED_DISP pid=6605 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:26.708615 systemd-logind[1619]: Session 13 logged out. Waiting for processes to exit. Jan 20 02:29:26.719634 systemd-logind[1619]: Removed session 13. 
Jan 20 02:29:26.747956 kernel: audit: type=1106 audit(1768876166.631:779): pid=6605 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:26.748101 kernel: audit: type=1104 audit(1768876166.631:780): pid=6605 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:26.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.97:22-10.0.0.1:55586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:27.526893 kubelet[3053]: E0120 02:29:27.523349 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:29:27.555094 containerd[1640]: time="2026-01-20T02:29:27.554702623Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:29:27.698142 containerd[1640]: time="2026-01-20T02:29:27.698073303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:29:27.713911 containerd[1640]: time="2026-01-20T02:29:27.713706592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:29:27.713911 containerd[1640]: time="2026-01-20T02:29:27.713859036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:29:27.714802 kubelet[3053]: E0120 02:29:27.714720 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:29:27.715067 kubelet[3053]: E0120 02:29:27.714966 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:29:27.715418 kubelet[3053]: E0120 02:29:27.715338 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:29:27.715730 kubelet[3053]: E0120 02:29:27.715643 3053 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:29:31.845951 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:29:31.846097 kernel: audit: type=1130 audit(1768876171.821:782): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.97:22-10.0.0.1:55618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:31.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.97:22-10.0.0.1:55618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:31.828164 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:55618.service - OpenSSH per-connection server daemon (10.0.0.1:55618). 
Jan 20 02:29:32.520000 audit[6626]: USER_ACCT pid=6626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:32.532944 sshd[6626]: Accepted publickey for core from 10.0.0.1 port 55618 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:29:32.549323 sshd-session[6626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:32.535000 audit[6626]: CRED_ACQ pid=6626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:32.586828 kernel: audit: type=1101 audit(1768876172.520:783): pid=6626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:32.586961 kernel: audit: type=1103 audit(1768876172.535:784): pid=6626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:32.657611 kernel: audit: type=1006 audit(1768876172.540:785): pid=6626 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 20 02:29:32.657803 kernel: audit: type=1300 audit(1768876172.540:785): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc87b36470 a2=3 a3=0 items=0 ppid=1 pid=6626 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:29:32.540000 audit[6626]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc87b36470 a2=3 a3=0 items=0 ppid=1 pid=6626 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:29:32.628724 systemd-logind[1619]: New session 14 of user core. Jan 20 02:29:32.540000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:29:32.711779 kernel: audit: type=1327 audit(1768876172.540:785): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:29:32.760886 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 02:29:32.797000 audit[6626]: USER_START pid=6626 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:32.923611 kernel: audit: type=1105 audit(1768876172.797:786): pid=6626 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:32.923781 kernel: audit: type=1103 audit(1768876172.839:787): pid=6630 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:32.839000 audit[6630]: CRED_ACQ pid=6630 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:33.519638 sshd[6630]: Connection closed by 10.0.0.1 port 55618 Jan 20 02:29:33.523331 sshd-session[6626]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:33.540821 containerd[1640]: time="2026-01-20T02:29:33.534977660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:29:33.539000 audit[6626]: USER_END pid=6626 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:33.563660 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:55618.service: Deactivated successfully. Jan 20 02:29:33.569625 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 02:29:33.616349 kernel: audit: type=1106 audit(1768876173.539:788): pid=6626 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:33.572622 systemd-logind[1619]: Session 14 logged out. Waiting for processes to exit. Jan 20 02:29:33.580102 systemd-logind[1619]: Removed session 14. 
Jan 20 02:29:33.539000 audit[6626]: CRED_DISP pid=6626 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:33.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.97:22-10.0.0.1:55618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:33.681395 kernel: audit: type=1104 audit(1768876173.539:789): pid=6626 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:33.699597 containerd[1640]: time="2026-01-20T02:29:33.697405919Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:29:33.699794 containerd[1640]: time="2026-01-20T02:29:33.699592788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:29:33.699794 containerd[1640]: time="2026-01-20T02:29:33.699699927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:29:33.710196 kubelet[3053]: E0120 02:29:33.710001 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:29:33.723660 kubelet[3053]: E0120 02:29:33.712113 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:29:33.723660 kubelet[3053]: E0120 02:29:33.718214 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:29:33.727491 kubelet[3053]: E0120 02:29:33.724652 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:29:37.563077 kubelet[3053]: E0120 02:29:37.553201 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:29:38.545193 kubelet[3053]: E0120 02:29:38.536233 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:29:38.777457 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:29:38.780724 kernel: audit: type=1130 audit(1768876178.681:791): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.97:22-10.0.0.1:43546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:38.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.97:22-10.0.0.1:43546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:38.681472 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:43546.service - OpenSSH per-connection server daemon (10.0.0.1:43546). 
Jan 20 02:29:39.318000 audit[6681]: USER_ACCT pid=6681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:39.329210 sshd[6681]: Accepted publickey for core from 10.0.0.1 port 43546 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:29:39.371243 containerd[1640]: time="2026-01-20T02:29:39.371014684Z" level=info msg="container event discarded" container=7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f type=CONTAINER_CREATED_EVENT Jan 20 02:29:39.389225 kernel: audit: type=1101 audit(1768876179.318:792): pid=6681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:39.389346 kernel: audit: type=1103 audit(1768876179.343:793): pid=6681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:39.343000 audit[6681]: CRED_ACQ pid=6681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:39.384051 sshd-session[6681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:39.431890 systemd-logind[1619]: New session 15 of user core. 
Jan 20 02:29:39.439464 kernel: audit: type=1006 audit(1768876179.369:794): pid=6681 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 20 02:29:39.369000 audit[6681]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea52d8c90 a2=3 a3=0 items=0 ppid=1 pid=6681 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:29:39.522666 kernel: audit: type=1300 audit(1768876179.369:794): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea52d8c90 a2=3 a3=0 items=0 ppid=1 pid=6681 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:29:39.522786 kernel: audit: type=1327 audit(1768876179.369:794): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:29:39.369000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:29:39.523918 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 20 02:29:39.535660 kubelet[3053]: E0120 02:29:39.530721 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:29:39.612000 audit[6681]: USER_START pid=6681 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:39.686698 kernel: audit: type=1105 audit(1768876179.612:795): pid=6681 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:39.705000 audit[6693]: CRED_ACQ pid=6693 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:39.726664 kernel: audit: type=1103 audit(1768876179.705:796): pid=6693 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:40.590886 kubelet[3053]: E0120 02:29:40.590621 3053 pod_workers.go:1324] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:29:40.617844 containerd[1640]: time="2026-01-20T02:29:40.612260607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:29:40.939782 containerd[1640]: time="2026-01-20T02:29:40.931219753Z" level=info msg="container event discarded" container=7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f type=CONTAINER_STARTED_EVENT Jan 20 02:29:40.939782 containerd[1640]: time="2026-01-20T02:29:40.931436096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:29:41.015448 containerd[1640]: time="2026-01-20T02:29:41.015256260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:29:41.015448 containerd[1640]: time="2026-01-20T02:29:41.015382273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:29:41.015983 kubelet[3053]: E0120 02:29:41.015938 3053 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:29:41.026279 kubelet[3053]: E0120 02:29:41.016390 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:29:41.026279 kubelet[3053]: E0120 02:29:41.016496 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:29:41.086746 containerd[1640]: time="2026-01-20T02:29:41.083734461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:29:41.215415 sshd[6693]: Connection closed by 10.0.0.1 port 43546 Jan 20 02:29:41.230030 sshd-session[6681]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:41.325000 audit[6681]: USER_END pid=6681 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:41.397130 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:43546.service: Deactivated successfully. 
Jan 20 02:29:41.492232 kernel: audit: type=1106 audit(1768876181.325:797): pid=6681 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:41.325000 audit[6681]: CRED_DISP pid=6681 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:41.501751 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 02:29:41.633216 kernel: audit: type=1104 audit(1768876181.325:798): pid=6681 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:41.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.97:22-10.0.0.1:43546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:41.638933 systemd-logind[1619]: Session 15 logged out. Waiting for processes to exit. Jan 20 02:29:41.742762 containerd[1640]: time="2026-01-20T02:29:41.737996285Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:29:41.775774 systemd-logind[1619]: Removed session 15. 
Jan 20 02:29:41.789457 containerd[1640]: time="2026-01-20T02:29:41.789251577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:29:41.789457 containerd[1640]: time="2026-01-20T02:29:41.789409580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:29:41.793818 kubelet[3053]: E0120 02:29:41.790927 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:29:41.793818 kubelet[3053]: E0120 02:29:41.790987 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:29:41.793818 kubelet[3053]: E0120 02:29:41.791081 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:29:41.793818 kubelet[3053]: E0120 02:29:41.791143 3053 pod_workers.go:1324] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:29:46.316491 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:46892.service - OpenSSH per-connection server daemon (10.0.0.1:46892). Jan 20 02:29:46.374220 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:29:46.374472 kernel: audit: type=1130 audit(1768876186.337:800): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.97:22-10.0.0.1:46892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:46.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.97:22-10.0.0.1:46892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:29:47.673877 kubelet[3053]: E0120 02:29:47.673810 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:29:47.964000 audit[6709]: USER_ACCT pid=6709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:48.039678 kernel: audit: type=1101 audit(1768876187.964:801): pid=6709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:48.039836 kernel: audit: type=1103 audit(1768876187.964:802): pid=6709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:47.964000 audit[6709]: CRED_ACQ pid=6709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:48.040119 sshd[6709]: Accepted publickey for core from 10.0.0.1 port 46892 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 
02:29:47.972961 sshd-session[6709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:47.998416 systemd-logind[1619]: New session 16 of user core. Jan 20 02:29:48.116681 kernel: audit: type=1006 audit(1768876187.964:803): pid=6709 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 20 02:29:47.964000 audit[6709]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecdaecdd0 a2=3 a3=0 items=0 ppid=1 pid=6709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:29:48.176878 kernel: audit: type=1300 audit(1768876187.964:803): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecdaecdd0 a2=3 a3=0 items=0 ppid=1 pid=6709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:29:48.180606 kernel: audit: type=1327 audit(1768876187.964:803): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:29:47.964000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:29:48.185202 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 20 02:29:48.298000 audit[6709]: USER_START pid=6709 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:48.381115 kernel: audit: type=1105 audit(1768876188.298:804): pid=6709 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:48.353000 audit[6713]: CRED_ACQ pid=6713 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:48.478016 kernel: audit: type=1103 audit(1768876188.353:805): pid=6713 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:49.538822 kubelet[3053]: E0120 02:29:49.538098 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:29:49.783693 sshd[6713]: Connection 
closed by 10.0.0.1 port 46892 Jan 20 02:29:49.796595 sshd-session[6709]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:49.818000 audit[6709]: USER_END pid=6709 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:49.935951 kernel: audit: type=1106 audit(1768876189.818:806): pid=6709 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:49.943065 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:46892.service: Deactivated successfully. Jan 20 02:29:49.824000 audit[6709]: CRED_DISP pid=6709 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:50.001817 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 02:29:50.022787 kernel: audit: type=1104 audit(1768876189.824:807): pid=6709 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:50.022678 systemd-logind[1619]: Session 16 logged out. Waiting for processes to exit. Jan 20 02:29:49.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.97:22-10.0.0.1:46892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:29:50.086049 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Jan 20 02:29:50.094511 systemd-logind[1619]: Removed session 16. Jan 20 02:29:50.904179 systemd-tmpfiles[6729]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 02:29:50.906794 systemd-tmpfiles[6729]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 02:29:50.911296 systemd-tmpfiles[6729]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 02:29:50.935446 systemd-tmpfiles[6729]: ACLs are not supported, ignoring. Jan 20 02:29:50.935618 systemd-tmpfiles[6729]: ACLs are not supported, ignoring. Jan 20 02:29:50.997958 kubelet[3053]: E0120 02:29:50.987431 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:29:51.096474 systemd-tmpfiles[6729]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 02:29:51.097873 systemd-tmpfiles[6729]: Skipping /boot Jan 20 02:29:51.194186 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jan 20 02:29:51.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:29:51.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:51.205057 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Jan 20 02:29:51.611357 kubelet[3053]: E0120 02:29:51.610938 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:29:51.866485 containerd[1640]: time="2026-01-20T02:29:51.866340597Z" level=info msg="container event discarded" container=7019d87d3471e3951de873ac195eeec0b56d8d6069bcefcb1aab35a6cc29707f type=CONTAINER_STOPPED_EVENT Jan 20 02:29:52.544193 kubelet[3053]: E0120 02:29:52.541059 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:29:52.605586 kubelet[3053]: E0120 02:29:52.605431 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:29:54.524616 kubelet[3053]: E0120 02:29:54.524160 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:29:54.891851 kernel: kauditd_printk_skb: 3 callbacks suppressed Jan 20 02:29:54.892029 kernel: audit: type=1130 audit(1768876194.867:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.97:22-10.0.0.1:53904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:54.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.97:22-10.0.0.1:53904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:29:54.867675 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:53904.service - OpenSSH per-connection server daemon (10.0.0.1:53904). Jan 20 02:29:55.479312 sshd[6735]: Accepted publickey for core from 10.0.0.1 port 53904 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:29:55.476000 audit[6735]: USER_ACCT pid=6735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:55.528493 sshd-session[6735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:29:55.541753 kernel: audit: type=1101 audit(1768876195.476:812): pid=6735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:55.495000 audit[6735]: CRED_ACQ pid=6735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:55.612765 kernel: audit: type=1103 audit(1768876195.495:813): pid=6735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:55.656413 kernel: audit: type=1006 audit(1768876195.495:814): pid=6735 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 20 02:29:55.495000 audit[6735]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca2c14860 a2=3 a3=0 items=0 ppid=1 pid=6735 
auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:29:55.732811 kernel: audit: type=1300 audit(1768876195.495:814): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca2c14860 a2=3 a3=0 items=0 ppid=1 pid=6735 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:29:55.732977 kernel: audit: type=1327 audit(1768876195.495:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:29:55.495000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:29:55.804323 systemd-logind[1619]: New session 17 of user core. Jan 20 02:29:55.845388 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 02:29:55.915000 audit[6735]: USER_START pid=6735 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:56.006933 kernel: audit: type=1105 audit(1768876195.915:815): pid=6735 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:55.938000 audit[6739]: CRED_ACQ pid=6739 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:56.042765 
kernel: audit: type=1103 audit(1768876195.938:816): pid=6739 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:56.617740 kubelet[3053]: E0120 02:29:56.611567 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:29:57.347078 sshd[6739]: Connection closed by 10.0.0.1 port 53904 Jan 20 02:29:57.366735 sshd-session[6735]: pam_unix(sshd:session): session closed for user core Jan 20 02:29:57.396000 audit[6735]: USER_END pid=6735 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:57.451389 systemd-logind[1619]: Session 17 logged out. Waiting for processes to exit. Jan 20 02:29:57.464794 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:53904.service: Deactivated successfully. Jan 20 02:29:57.486167 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 20 02:29:57.506079 kernel: audit: type=1106 audit(1768876197.396:817): pid=6735 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:57.396000 audit[6735]: CRED_DISP pid=6735 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:29:57.531276 systemd-logind[1619]: Removed session 17. Jan 20 02:29:57.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.97:22-10.0.0.1:53904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:29:57.566701 kernel: audit: type=1104 audit(1768876197.396:818): pid=6735 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:00.524873 kubelet[3053]: E0120 02:30:00.523293 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:02.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.97:22-10.0.0.1:53958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:02.419235 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:53958.service - OpenSSH per-connection server daemon (10.0.0.1:53958). 
Jan 20 02:30:02.439829 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:30:02.439967 kernel: audit: type=1130 audit(1768876202.415:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.97:22-10.0.0.1:53958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:02.562614 kubelet[3053]: E0120 02:30:02.562363 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:30:02.857576 kernel: audit: type=1101 audit(1768876202.811:821): pid=6758 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:02.860088 kernel: audit: type=1103 audit(1768876202.829:822): pid=6758 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:02.811000 audit[6758]: USER_ACCT pid=6758 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:02.829000 audit[6758]: CRED_ACQ pid=6758 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:02.860308 sshd[6758]: Accepted publickey for core from 10.0.0.1 port 53958 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:30:02.865768 sshd-session[6758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:02.921346 kernel: audit: type=1006 audit(1768876202.829:823): pid=6758 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 20 02:30:02.907703 systemd-logind[1619]: New session 18 of user core. Jan 20 02:30:02.829000 audit[6758]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecdee5270 a2=3 a3=0 items=0 ppid=1 pid=6758 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:02.977033 kernel: audit: type=1300 audit(1768876202.829:823): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecdee5270 a2=3 a3=0 items=0 ppid=1 pid=6758 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:02.829000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:02.998623 kernel: audit: type=1327 audit(1768876202.829:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:02.988253 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 02:30:03.042000 audit[6758]: USER_START pid=6758 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:03.103342 kernel: audit: type=1105 audit(1768876203.042:824): pid=6758 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:03.103506 kernel: audit: type=1103 audit(1768876203.067:825): pid=6762 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:03.067000 audit[6762]: CRED_ACQ pid=6762 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:03.524277 kubelet[3053]: E0120 02:30:03.523827 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:30:04.075784 sshd[6762]: Connection 
closed by 10.0.0.1 port 53958 Jan 20 02:30:04.082982 sshd-session[6758]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:04.106000 audit[6758]: USER_END pid=6758 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:04.178589 kernel: audit: type=1106 audit(1768876204.106:826): pid=6758 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:04.106000 audit[6758]: CRED_DISP pid=6758 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:04.200894 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:53958.service: Deactivated successfully. Jan 20 02:30:04.209375 kernel: audit: type=1104 audit(1768876204.106:827): pid=6758 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:04.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.97:22-10.0.0.1:53958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:04.220896 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 02:30:04.262818 systemd-logind[1619]: Session 18 logged out. Waiting for processes to exit. 
Jan 20 02:30:04.273133 systemd-logind[1619]: Removed session 18. Jan 20 02:30:04.542590 kubelet[3053]: E0120 02:30:04.542296 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:30:04.563328 kubelet[3053]: E0120 02:30:04.563104 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:30:05.521634 kubelet[3053]: E0120 02:30:05.518024 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:30:06.615596 kubelet[3053]: E0120 02:30:06.615416 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:30:08.514489 kubelet[3053]: E0120 02:30:08.514413 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:09.185694 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:47652.service - OpenSSH per-connection server daemon (10.0.0.1:47652). Jan 20 02:30:09.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.97:22-10.0.0.1:47652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:30:09.217290 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:30:09.217488 kernel: audit: type=1130 audit(1768876209.183:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.97:22-10.0.0.1:47652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:09.949000 audit[6800]: USER_ACCT pid=6800 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:10.007760 kernel: audit: type=1101 audit(1768876209.949:830): pid=6800 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:10.014198 sshd[6800]: Accepted publickey for core from 10.0.0.1 port 47652 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:30:10.024106 sshd-session[6800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:10.010000 audit[6800]: CRED_ACQ pid=6800 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:10.060598 kernel: audit: type=1103 audit(1768876210.010:831): pid=6800 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:10.119883 kernel: audit: type=1006 audit(1768876210.010:832): pid=6800 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 20 02:30:10.120960 kernel: audit: type=1300 audit(1768876210.010:832): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8162c0e0 a2=3 a3=0 items=0 ppid=1 pid=6800 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:10.010000 audit[6800]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8162c0e0 a2=3 a3=0 items=0 ppid=1 pid=6800 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:10.136670 systemd-logind[1619]: New session 19 of user core. Jan 20 02:30:10.010000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:10.195565 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 20 02:30:10.196123 kernel: audit: type=1327 audit(1768876210.010:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:10.239000 audit[6800]: USER_START pid=6800 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:10.309606 kernel: audit: type=1105 audit(1768876210.239:833): pid=6800 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:10.310605 kernel: audit: type=1103 audit(1768876210.276:834): pid=6804 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:10.276000 audit[6804]: CRED_ACQ pid=6804 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:10.671629 kubelet[3053]: E0120 02:30:10.670515 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:11.524957 sshd[6804]: Connection closed by 10.0.0.1 port 47652 Jan 20 02:30:11.526934 sshd-session[6800]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:11.602000 audit[6800]: USER_END pid=6800 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:11.623274 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:47652.service: Deactivated successfully. Jan 20 02:30:11.685189 kernel: audit: type=1106 audit(1768876211.602:835): pid=6800 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:11.688974 kernel: audit: type=1104 audit(1768876211.603:836): pid=6800 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:11.603000 audit[6800]: CRED_DISP pid=6800 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:11.652374 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 02:30:11.666451 systemd-logind[1619]: Session 19 logged out. Waiting for processes to exit. Jan 20 02:30:11.683770 systemd-logind[1619]: Removed session 19. Jan 20 02:30:11.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.97:22-10.0.0.1:47652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:30:14.619305 kubelet[3053]: E0120 02:30:14.615351 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:30:15.584931 kubelet[3053]: E0120 02:30:15.582220 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:30:15.592369 kubelet[3053]: E0120 02:30:15.592275 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:30:16.529061 kubelet[3053]: E0120 02:30:16.524464 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:30:16.656938 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:47488.service - OpenSSH per-connection server daemon (10.0.0.1:47488). Jan 20 02:30:16.763468 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:30:16.782090 kernel: audit: type=1130 audit(1768876216.655:838): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.97:22-10.0.0.1:47488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:16.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.97:22-10.0.0.1:47488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:30:17.308000 audit[6829]: USER_ACCT pid=6829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:17.326134 sshd-session[6829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:17.335029 kernel: audit: type=1101 audit(1768876217.308:839): pid=6829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:17.335085 sshd[6829]: Accepted publickey for core from 10.0.0.1 port 47488 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:30:17.315000 audit[6829]: CRED_ACQ pid=6829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:17.363055 kernel: audit: type=1103 audit(1768876217.315:840): pid=6829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:17.380289 systemd-logind[1619]: New session 20 of user core. 
Jan 20 02:30:17.315000 audit[6829]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd16a53d80 a2=3 a3=0 items=0 ppid=1 pid=6829 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:17.419906 kernel: audit: type=1006 audit(1768876217.315:841): pid=6829 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 20 02:30:17.420051 kernel: audit: type=1300 audit(1768876217.315:841): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd16a53d80 a2=3 a3=0 items=0 ppid=1 pid=6829 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:17.420088 kernel: audit: type=1327 audit(1768876217.315:841): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:17.315000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:17.449070 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 20 02:30:17.522578 kernel: audit: type=1105 audit(1768876217.485:842): pid=6829 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:17.485000 audit[6829]: USER_START pid=6829 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:17.531000 audit[6833]: CRED_ACQ pid=6833 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:17.571264 kernel: audit: type=1103 audit(1768876217.531:843): pid=6833 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:18.391971 sshd[6833]: Connection closed by 10.0.0.1 port 47488 Jan 20 02:30:18.395705 sshd-session[6829]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:18.416000 audit[6829]: USER_END pid=6829 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:18.435564 systemd-logind[1619]: Session 20 logged out. Waiting for processes to exit. 
Jan 20 02:30:18.416000 audit[6829]: CRED_DISP pid=6829 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:18.436956 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:47488.service: Deactivated successfully. Jan 20 02:30:18.464478 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 02:30:18.501782 systemd-logind[1619]: Removed session 20. Jan 20 02:30:18.548745 kernel: audit: type=1106 audit(1768876218.416:844): pid=6829 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:18.549026 kernel: audit: type=1104 audit(1768876218.416:845): pid=6829 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:18.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.97:22-10.0.0.1:47488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:30:18.599231 kubelet[3053]: E0120 02:30:18.598605 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:30:18.614239 kubelet[3053]: E0120 02:30:18.608874 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:30:23.579865 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:30:23.579988 kernel: audit: type=1130 audit(1768876223.541:847): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.97:22-10.0.0.1:47514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:30:23.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.97:22-10.0.0.1:47514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:23.542081 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:47514.service - OpenSSH per-connection server daemon (10.0.0.1:47514). Jan 20 02:30:24.021000 audit[6847]: USER_ACCT pid=6847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:24.046414 sshd[6847]: Accepted publickey for core from 10.0.0.1 port 47514 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:30:24.057612 sshd-session[6847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:24.100967 kernel: audit: type=1101 audit(1768876224.021:848): pid=6847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:24.101716 kernel: audit: type=1103 audit(1768876224.029:849): pid=6847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:24.029000 audit[6847]: CRED_ACQ pid=6847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:24.111316 systemd-logind[1619]: New session 21 of user core. 
Jan 20 02:30:24.176618 kernel: audit: type=1006 audit(1768876224.029:850): pid=6847 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 20 02:30:24.029000 audit[6847]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec6959c50 a2=3 a3=0 items=0 ppid=1 pid=6847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:24.263960 kernel: audit: type=1300 audit(1768876224.029:850): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec6959c50 a2=3 a3=0 items=0 ppid=1 pid=6847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:24.272698 kernel: audit: type=1327 audit(1768876224.029:850): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:24.029000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:24.309784 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 20 02:30:24.330000 audit[6847]: USER_START pid=6847 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:24.367646 kernel: audit: type=1105 audit(1768876224.330:851): pid=6847 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:24.348000 audit[6851]: CRED_ACQ pid=6851 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:24.404664 kernel: audit: type=1103 audit(1768876224.348:852): pid=6851 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:24.906589 sshd[6851]: Connection closed by 10.0.0.1 port 47514 Jan 20 02:30:24.905200 sshd-session[6847]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:24.912000 audit[6847]: USER_END pid=6847 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:25.010747 kernel: audit: type=1106 audit(1768876224.912:853): pid=6847 uid=0 auid=500 ses=21 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:24.934111 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:47514.service: Deactivated successfully. Jan 20 02:30:24.941864 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 02:30:24.912000 audit[6847]: CRED_DISP pid=6847 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:25.052423 systemd-logind[1619]: Session 21 logged out. Waiting for processes to exit. Jan 20 02:30:25.061615 kernel: audit: type=1104 audit(1768876224.912:854): pid=6847 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:24.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.97:22-10.0.0.1:47514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:25.102660 systemd-logind[1619]: Removed session 21. 
Jan 20 02:30:26.577629 kubelet[3053]: E0120 02:30:26.576651 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:30:27.563789 kubelet[3053]: E0120 02:30:27.563269 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:30:28.752099 kubelet[3053]: E0120 02:30:28.751368 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:30:30.013993 kubelet[3053]: E0120 02:30:30.013390 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:30:30.023467 kubelet[3053]: E0120 02:30:30.020221 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:30:30.025409 kubelet[3053]: E0120 02:30:30.023801 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:30.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.97:22-10.0.0.1:44066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:30.215256 systemd[1]: Started sshd@20-10.0.0.97:22-10.0.0.1:44066.service - OpenSSH per-connection server daemon (10.0.0.1:44066). 
Jan 20 02:30:30.303916 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:30:30.304121 kernel: audit: type=1130 audit(1768876230.214:856): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.97:22-10.0.0.1:44066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:31.119691 kernel: audit: type=1101 audit(1768876231.106:857): pid=6867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:31.106000 audit[6867]: USER_ACCT pid=6867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:31.115183 sshd-session[6867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:31.120580 sshd[6867]: Accepted publickey for core from 10.0.0.1 port 44066 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:30:31.107000 audit[6867]: CRED_ACQ pid=6867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:31.162500 kernel: audit: type=1103 audit(1768876231.107:858): pid=6867 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:31.138159 systemd-logind[1619]: New session 22 of user core. 
Jan 20 02:30:31.171943 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 02:30:31.178605 kernel: audit: type=1006 audit(1768876231.107:859): pid=6867 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 20 02:30:31.107000 audit[6867]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe71d3b7b0 a2=3 a3=0 items=0 ppid=1 pid=6867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:31.212382 kernel: audit: type=1300 audit(1768876231.107:859): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe71d3b7b0 a2=3 a3=0 items=0 ppid=1 pid=6867 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:31.107000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:31.218740 kernel: audit: type=1327 audit(1768876231.107:859): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:31.211000 audit[6867]: USER_START pid=6867 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:31.225000 audit[6873]: CRED_ACQ pid=6873 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:31.275428 kernel: audit: type=1105 audit(1768876231.211:860): pid=6867 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:31.275634 kernel: audit: type=1103 audit(1768876231.225:861): pid=6873 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:32.372421 sshd[6873]: Connection closed by 10.0.0.1 port 44066 Jan 20 02:30:32.380760 sshd-session[6867]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:32.504000 audit[6867]: USER_END pid=6867 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:32.558609 kernel: audit: type=1106 audit(1768876232.504:862): pid=6867 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:32.676337 kernel: audit: type=1104 audit(1768876232.535:863): pid=6867 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:32.535000 audit[6867]: CRED_DISP pid=6867 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:32.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.97:22-10.0.0.1:44066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:32.613651 systemd-logind[1619]: Session 22 logged out. Waiting for processes to exit. Jan 20 02:30:32.622462 systemd[1]: sshd@20-10.0.0.97:22-10.0.0.1:44066.service: Deactivated successfully. Jan 20 02:30:32.643377 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 02:30:32.818061 systemd-logind[1619]: Removed session 22. Jan 20 02:30:32.868968 kubelet[3053]: E0120 02:30:32.805513 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:30:37.467426 systemd[1]: Started sshd@21-10.0.0.97:22-10.0.0.1:43342.service - OpenSSH per-connection server daemon (10.0.0.1:43342). 
Jan 20 02:30:37.484095 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:30:37.484355 kernel: audit: type=1130 audit(1768876237.467:865): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.97:22-10.0.0.1:43342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:37.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.97:22-10.0.0.1:43342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:38.529000 audit[6887]: USER_ACCT pid=6887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:38.557927 sshd[6887]: Accepted publickey for core from 10.0.0.1 port 43342 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:30:38.628984 sshd-session[6887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:38.610000 audit[6887]: CRED_ACQ pid=6887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:38.696964 kernel: audit: type=1101 audit(1768876238.529:866): pid=6887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:38.697148 kernel: audit: type=1103 audit(1768876238.610:867): pid=6887 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:38.697193 kernel: audit: type=1006 audit(1768876238.610:868): pid=6887 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 20 02:30:38.736037 kernel: audit: type=1300 audit(1768876238.610:868): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff27038790 a2=3 a3=0 items=0 ppid=1 pid=6887 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:38.610000 audit[6887]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff27038790 a2=3 a3=0 items=0 ppid=1 pid=6887 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:38.747097 systemd-logind[1619]: New session 23 of user core. Jan 20 02:30:38.822512 kernel: audit: type=1327 audit(1768876238.610:868): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:38.610000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:38.969582 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 20 02:30:39.268735 kernel: audit: type=1105 audit(1768876239.161:869): pid=6887 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:39.161000 audit[6887]: USER_START pid=6887 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:39.227000 audit[6916]: CRED_ACQ pid=6916 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:39.326723 kernel: audit: type=1103 audit(1768876239.227:870): pid=6916 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:40.539144 kubelet[3053]: E0120 02:30:40.538867 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:30:41.500346 sshd[6916]: Connection closed by 10.0.0.1 port 43342 Jan 20 
02:30:41.501903 sshd-session[6887]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:41.603501 kernel: audit: type=1106 audit(1768876241.502:871): pid=6887 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:41.502000 audit[6887]: USER_END pid=6887 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:41.603897 kubelet[3053]: E0120 02:30:41.568418 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:30:41.603897 kubelet[3053]: E0120 02:30:41.569473 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:41.609063 kubelet[3053]: E0120 02:30:41.609017 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:30:41.502000 audit[6887]: CRED_DISP pid=6887 uid=0 auid=500 ses=23 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:41.622214 systemd[1]: sshd@21-10.0.0.97:22-10.0.0.1:43342.service: Deactivated successfully. Jan 20 02:30:41.641232 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 02:30:41.696259 kernel: audit: type=1104 audit(1768876241.502:872): pid=6887 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:41.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.97:22-10.0.0.1:43342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:41.696899 systemd-logind[1619]: Session 23 logged out. Waiting for processes to exit. Jan 20 02:30:41.723913 systemd-logind[1619]: Removed session 23. 
Jan 20 02:30:42.638479 kubelet[3053]: E0120 02:30:42.638387 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:30:42.731960 kubelet[3053]: E0120 02:30:42.726966 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:30:43.575738 kubelet[3053]: E0120 02:30:43.573153 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:30:46.521417 kubelet[3053]: E0120 02:30:46.518264 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:30:46.593428 systemd[1]: Started sshd@22-10.0.0.97:22-10.0.0.1:59592.service - OpenSSH per-connection server daemon (10.0.0.1:59592). Jan 20 02:30:46.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.97:22-10.0.0.1:59592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:46.605004 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:30:46.605147 kernel: audit: type=1130 audit(1768876246.593:874): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.97:22-10.0.0.1:59592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:30:47.120000 audit[6931]: USER_ACCT pid=6931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:47.124231 sshd[6931]: Accepted publickey for core from 10.0.0.1 port 59592 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:30:47.173583 kernel: audit: type=1101 audit(1768876247.120:875): pid=6931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:47.136000 audit[6931]: CRED_ACQ pid=6931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:47.177484 sshd-session[6931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:47.215772 kernel: audit: type=1103 audit(1768876247.136:876): pid=6931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:47.234697 systemd-logind[1619]: New session 24 of user core. 
Jan 20 02:30:47.136000 audit[6931]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5b543bb0 a2=3 a3=0 items=0 ppid=1 pid=6931 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:47.327276 kernel: audit: type=1006 audit(1768876247.136:877): pid=6931 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 20 02:30:47.327489 kernel: audit: type=1300 audit(1768876247.136:877): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5b543bb0 a2=3 a3=0 items=0 ppid=1 pid=6931 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:47.339468 kernel: audit: type=1327 audit(1768876247.136:877): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:47.136000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:47.354512 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 20 02:30:47.374000 audit[6931]: USER_START pid=6931 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:47.403000 audit[6935]: CRED_ACQ pid=6935 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:47.427052 kernel: audit: type=1105 audit(1768876247.374:878): pid=6931 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:47.427261 kernel: audit: type=1103 audit(1768876247.403:879): pid=6935 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:48.032313 sshd[6935]: Connection closed by 10.0.0.1 port 59592 Jan 20 02:30:48.032180 sshd-session[6931]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:48.058000 audit[6931]: USER_END pid=6931 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:48.082586 systemd[1]: sshd@22-10.0.0.97:22-10.0.0.1:59592.service: Deactivated successfully. 
Jan 20 02:30:48.098736 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 02:30:48.107987 kernel: audit: type=1106 audit(1768876248.058:880): pid=6931 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:48.058000 audit[6931]: CRED_DISP pid=6931 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:48.166671 kernel: audit: type=1104 audit(1768876248.058:881): pid=6931 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:48.165960 systemd-logind[1619]: Session 24 logged out. Waiting for processes to exit. Jan 20 02:30:48.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.97:22-10.0.0.1:59592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:48.176176 systemd-logind[1619]: Removed session 24. 
Jan 20 02:30:52.537486 kubelet[3053]: E0120 02:30:52.530237 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:30:53.088327 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:30:53.088469 kernel: audit: type=1130 audit(1768876253.081:883): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.97:22-10.0.0.1:59616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:53.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.97:22-10.0.0.1:59616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:53.083485 systemd[1]: Started sshd@23-10.0.0.97:22-10.0.0.1:59616.service - OpenSSH per-connection server daemon (10.0.0.1:59616). 
Jan 20 02:30:53.470793 sshd[6951]: Accepted publickey for core from 10.0.0.1 port 59616 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:30:53.466000 audit[6951]: USER_ACCT pid=6951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:53.486377 sshd-session[6951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:30:53.498607 kernel: audit: type=1101 audit(1768876253.466:884): pid=6951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:53.477000 audit[6951]: CRED_ACQ pid=6951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:53.536612 kernel: audit: type=1103 audit(1768876253.477:885): pid=6951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:53.556207 kernel: audit: type=1006 audit(1768876253.477:886): pid=6951 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 20 02:30:53.551650 systemd-logind[1619]: New session 25 of user core. 
Jan 20 02:30:53.477000 audit[6951]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee3c529b0 a2=3 a3=0 items=0 ppid=1 pid=6951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:53.562260 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 02:30:53.591943 kernel: audit: type=1300 audit(1768876253.477:886): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee3c529b0 a2=3 a3=0 items=0 ppid=1 pid=6951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:30:53.610103 kernel: audit: type=1327 audit(1768876253.477:886): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:53.477000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:30:53.579000 audit[6951]: USER_START pid=6951 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:53.690629 kernel: audit: type=1105 audit(1768876253.579:887): pid=6951 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:53.616000 audit[6955]: CRED_ACQ pid=6955 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:53.720379 kernel: audit: type=1103 audit(1768876253.616:888): pid=6955 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:54.577712 kubelet[3053]: E0120 02:30:54.575133 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:30:54.664336 sshd[6955]: Connection closed by 10.0.0.1 port 59616 Jan 20 02:30:54.661336 sshd-session[6951]: pam_unix(sshd:session): session closed for user core Jan 20 02:30:54.678000 audit[6951]: USER_END pid=6951 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:54.737956 kernel: audit: type=1106 audit(1768876254.678:889): pid=6951 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:54.738787 kernel: audit: type=1104 audit(1768876254.702:890): pid=6951 
uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:54.702000 audit[6951]: CRED_DISP pid=6951 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:30:54.728203 systemd[1]: sshd@23-10.0.0.97:22-10.0.0.1:59616.service: Deactivated successfully. Jan 20 02:30:54.735400 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 02:30:54.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.97:22-10.0.0.1:59616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:54.774475 systemd-logind[1619]: Session 25 logged out. Waiting for processes to exit. Jan 20 02:30:54.787912 systemd-logind[1619]: Removed session 25. 
Jan 20 02:30:55.558623 kubelet[3053]: E0120 02:30:55.552396 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:30:56.544787 kubelet[3053]: E0120 02:30:56.543137 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:30:57.586193 kubelet[3053]: E0120 02:30:57.584144 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:30:59.808147 systemd[1]: Started sshd@24-10.0.0.97:22-10.0.0.1:43178.service - OpenSSH per-connection server daemon (10.0.0.1:43178). Jan 20 02:30:59.899418 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:30:59.899471 kernel: audit: type=1130 audit(1768876259.806:892): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.97:22-10.0.0.1:43178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:30:59.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.97:22-10.0.0.1:43178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:00.396767 containerd[1640]: time="2026-01-20T02:31:00.396656422Z" level=info msg="container event discarded" container=8e85802aeee374b1de9dc1374fe0b54bb6ed7fe985d678e67e4a5852dc637d70 type=CONTAINER_CREATED_EVENT Jan 20 02:31:00.500626 sshd[6970]: Accepted publickey for core from 10.0.0.1 port 43178 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:31:00.498000 audit[6970]: USER_ACCT pid=6970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:00.518368 sshd-session[6970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:00.566057 kernel: audit: type=1101 audit(1768876260.498:893): pid=6970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:00.610345 systemd-logind[1619]: New session 26 of user core. Jan 20 02:31:00.514000 audit[6970]: CRED_ACQ pid=6970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:00.679826 kernel: audit: type=1103 audit(1768876260.514:894): pid=6970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:00.708993 kernel: audit: type=1006 audit(1768876260.514:895): pid=6970 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 20 02:31:00.767750 kernel: audit: type=1300 audit(1768876260.514:895): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc529b8fb0 a2=3 a3=0 items=0 ppid=1 pid=6970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:00.514000 audit[6970]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc529b8fb0 a2=3 a3=0 items=0 ppid=1 pid=6970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:00.739787 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 20 02:31:00.514000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:00.803160 kernel: audit: type=1327 audit(1768876260.514:895): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:00.828000 audit[6970]: USER_START pid=6970 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:00.901016 kernel: audit: type=1105 audit(1768876260.828:896): pid=6970 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:00.939341 kernel: audit: type=1103 audit(1768876260.859:897): pid=6976 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:00.859000 audit[6976]: CRED_ACQ pid=6976 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:01.534037 kubelet[3053]: E0120 02:31:01.531893 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:31:01.715876 sshd[6976]: Connection closed by 10.0.0.1 port 43178 Jan 20 02:31:01.720848 sshd-session[6970]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:01.735000 audit[6970]: USER_END pid=6970 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:01.749081 systemd[1]: sshd@24-10.0.0.97:22-10.0.0.1:43178.service: Deactivated successfully. Jan 20 02:31:01.765772 systemd[1]: session-26.scope: Deactivated successfully. 
Jan 20 02:31:01.804827 kernel: audit: type=1106 audit(1768876261.735:898): pid=6970 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:01.808032 kernel: audit: type=1104 audit(1768876261.735:899): pid=6970 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:01.735000 audit[6970]: CRED_DISP pid=6970 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:01.797210 systemd-logind[1619]: Session 26 logged out. Waiting for processes to exit. Jan 20 02:31:01.806041 systemd-logind[1619]: Removed session 26. Jan 20 02:31:01.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.97:22-10.0.0.1:43178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:31:03.033316 containerd[1640]: time="2026-01-20T02:31:03.033105436Z" level=info msg="container event discarded" container=8e85802aeee374b1de9dc1374fe0b54bb6ed7fe985d678e67e4a5852dc637d70 type=CONTAINER_STARTED_EVENT Jan 20 02:31:06.521964 kubelet[3053]: E0120 02:31:06.518795 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:31:06.863987 systemd[1]: Started sshd@25-10.0.0.97:22-10.0.0.1:58580.service - OpenSSH per-connection server daemon (10.0.0.1:58580). Jan 20 02:31:06.927058 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:31:06.927188 kernel: audit: type=1130 audit(1768876266.862:901): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.97:22-10.0.0.1:58580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:06.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.97:22-10.0.0.1:58580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:31:07.250680 sshd[6990]: Accepted publickey for core from 10.0.0.1 port 58580 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:31:07.248000 audit[6990]: USER_ACCT pid=6990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:07.267913 sshd-session[6990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:07.257000 audit[6990]: CRED_ACQ pid=6990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:07.321686 systemd-logind[1619]: New session 27 of user core. Jan 20 02:31:07.338017 kernel: audit: type=1101 audit(1768876267.248:902): pid=6990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:07.338188 kernel: audit: type=1103 audit(1768876267.257:903): pid=6990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:07.344611 kernel: audit: type=1006 audit(1768876267.257:904): pid=6990 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 20 02:31:07.357147 kernel: audit: type=1300 audit(1768876267.257:904): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3d8fc6e0 a2=3 a3=0 items=0 ppid=1 pid=6990 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:07.257000 audit[6990]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3d8fc6e0 a2=3 a3=0 items=0 ppid=1 pid=6990 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:07.257000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:07.363143 kernel: audit: type=1327 audit(1768876267.257:904): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:07.371502 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 02:31:07.405000 audit[6990]: USER_START pid=6990 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:07.430777 kernel: audit: type=1105 audit(1768876267.405:905): pid=6990 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:07.430000 audit[7001]: CRED_ACQ pid=7001 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:07.470035 kernel: audit: type=1103 audit(1768876267.430:906): pid=7001 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:08.162102 sshd[7001]: Connection closed by 10.0.0.1 port 58580 Jan 20 02:31:08.180156 sshd-session[6990]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:08.214000 audit[6990]: USER_END pid=6990 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:08.253250 systemd-logind[1619]: Session 27 logged out. Waiting for processes to exit. Jan 20 02:31:08.262576 systemd[1]: sshd@25-10.0.0.97:22-10.0.0.1:58580.service: Deactivated successfully. Jan 20 02:31:08.280633 kernel: audit: type=1106 audit(1768876268.214:907): pid=6990 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:08.214000 audit[6990]: CRED_DISP pid=6990 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:08.315716 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 02:31:08.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.97:22-10.0.0.1:58580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:31:08.346613 kernel: audit: type=1104 audit(1768876268.214:908): pid=6990 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:08.375579 systemd-logind[1619]: Removed session 27. Jan 20 02:31:08.592105 kubelet[3053]: E0120 02:31:08.582512 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:08.603075 kubelet[3053]: E0120 02:31:08.597818 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:31:08.630479 kubelet[3053]: E0120 02:31:08.611443 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:31:10.592332 kubelet[3053]: E0120 02:31:10.589149 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:31:10.636177 kubelet[3053]: E0120 02:31:10.602861 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:31:11.530169 kubelet[3053]: E0120 02:31:11.528246 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:13.231863 systemd[1]: Started sshd@26-10.0.0.97:22-10.0.0.1:58598.service - OpenSSH per-connection server daemon (10.0.0.1:58598). 
Jan 20 02:31:13.304060 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:31:13.304185 kernel: audit: type=1130 audit(1768876273.255:910): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.97:22-10.0.0.1:58598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:13.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.97:22-10.0.0.1:58598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:13.750000 audit[7056]: USER_ACCT pid=7056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:13.770804 sshd[7056]: Accepted publickey for core from 10.0.0.1 port 58598 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:31:13.775489 sshd-session[7056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:13.814628 kernel: audit: type=1101 audit(1768876273.750:911): pid=7056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:13.814782 kernel: audit: type=1103 audit(1768876273.755:912): pid=7056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:13.755000 audit[7056]: CRED_ACQ pid=7056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:13.838665 kernel: audit: type=1006 audit(1768876273.755:913): pid=7056 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 20 02:31:13.755000 audit[7056]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd9e44a80 a2=3 a3=0 items=0 ppid=1 pid=7056 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:13.866621 kernel: audit: type=1300 audit(1768876273.755:913): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd9e44a80 a2=3 a3=0 items=0 ppid=1 pid=7056 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:13.866794 kernel: audit: type=1327 audit(1768876273.755:913): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:13.755000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:13.946463 systemd-logind[1619]: New session 28 of user core. Jan 20 02:31:14.007051 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 20 02:31:14.068000 audit[7056]: USER_START pid=7056 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:14.197058 kernel: audit: type=1105 audit(1768876274.068:914): pid=7056 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:14.205000 audit[7060]: CRED_ACQ pid=7060 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:14.276705 kernel: audit: type=1103 audit(1768876274.205:915): pid=7060 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:15.537096 kubelet[3053]: E0120 02:31:15.532878 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:15.573823 sshd[7060]: Connection closed by 10.0.0.1 port 58598 Jan 20 02:31:15.578397 sshd-session[7056]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:15.583916 kubelet[3053]: E0120 02:31:15.582395 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:31:15.593000 audit[7056]: USER_END pid=7056 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:15.663198 systemd[1]: sshd@26-10.0.0.97:22-10.0.0.1:58598.service: Deactivated successfully. Jan 20 02:31:15.691220 kernel: audit: type=1106 audit(1768876275.593:916): pid=7056 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:15.612000 audit[7056]: CRED_DISP pid=7056 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:15.713436 systemd[1]: session-28.scope: Deactivated successfully. 
Jan 20 02:31:15.740801 systemd-logind[1619]: Session 28 logged out. Waiting for processes to exit. Jan 20 02:31:15.760675 systemd-logind[1619]: Removed session 28. Jan 20 02:31:15.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.97:22-10.0.0.1:58598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:15.770404 kernel: audit: type=1104 audit(1768876275.612:917): pid=7056 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:18.570948 kubelet[3053]: E0120 02:31:18.569618 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:18.593758 kubelet[3053]: E0120 02:31:18.583481 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:31:20.135712 containerd[1640]: time="2026-01-20T02:31:20.135628828Z" level=info msg="container event discarded" container=e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191 type=CONTAINER_CREATED_EVENT Jan 20 02:31:20.176375 containerd[1640]: time="2026-01-20T02:31:20.158401463Z" level=info msg="container event discarded" container=e780ef0a472f9fa2af0dc31b56840988a31cd4d68b8d0c7bca0d5aec28457191 type=CONTAINER_STARTED_EVENT Jan 20 
02:31:20.528376 kubelet[3053]: E0120 02:31:20.521344 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:31:20.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.97:22-10.0.0.1:44052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:20.781411 systemd[1]: Started sshd@27-10.0.0.97:22-10.0.0.1:44052.service - OpenSSH per-connection server daemon (10.0.0.1:44052). Jan 20 02:31:20.808623 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:31:20.808796 kernel: audit: type=1130 audit(1768876280.781:919): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.97:22-10.0.0.1:44052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:31:21.355144 sshd[7075]: Accepted publickey for core from 10.0.0.1 port 44052 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:31:21.349000 audit[7075]: USER_ACCT pid=7075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:21.365738 sshd-session[7075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:21.402331 containerd[1640]: time="2026-01-20T02:31:21.402181116Z" level=info msg="container event discarded" container=a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672 type=CONTAINER_CREATED_EVENT Jan 20 02:31:21.402331 containerd[1640]: time="2026-01-20T02:31:21.402279128Z" level=info msg="container event discarded" container=a20d862f7346984194b20134781bfd856a1c78a2a1a116ddb20bd5f1c9e55672 type=CONTAINER_STARTED_EVENT Jan 20 02:31:21.412017 kernel: audit: type=1101 audit(1768876281.349:920): pid=7075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:21.353000 audit[7075]: CRED_ACQ pid=7075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:21.470428 kernel: audit: type=1103 audit(1768876281.353:921): pid=7075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:21.353000 
audit[7075]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcaa43b790 a2=3 a3=0 items=0 ppid=1 pid=7075 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:21.502302 systemd-logind[1619]: New session 29 of user core. Jan 20 02:31:21.530371 kubelet[3053]: E0120 02:31:21.530268 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:31:21.532737 containerd[1640]: time="2026-01-20T02:31:21.532499791Z" level=info msg="container event discarded" container=2589ed48a81a396c408f87301a5024416de46e8a010f2a34768f6e17ac6f4a2b type=CONTAINER_CREATED_EVENT Jan 20 02:31:21.570207 kernel: audit: type=1006 audit(1768876281.353:922): pid=7075 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 20 02:31:21.570338 kernel: audit: type=1300 audit(1768876281.353:922): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcaa43b790 a2=3 a3=0 items=0 ppid=1 pid=7075 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:21.570611 kernel: audit: type=1327 audit(1768876281.353:922): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:21.353000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:21.623109 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 20 02:31:21.670000 audit[7075]: USER_START pid=7075 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:21.733103 kernel: audit: type=1105 audit(1768876281.670:923): pid=7075 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:21.733245 kernel: audit: type=1103 audit(1768876281.701:924): pid=7079 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:21.701000 audit[7079]: CRED_ACQ pid=7079 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:21.746228 containerd[1640]: time="2026-01-20T02:31:21.745809513Z" level=info msg="container event discarded" container=70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34 type=CONTAINER_CREATED_EVENT Jan 20 02:31:21.746962 containerd[1640]: time="2026-01-20T02:31:21.746899830Z" level=info msg="container event discarded" container=70b12bc0a50dbf36b2e535957733fc900f45974d5e5bafee02da4e5ae66b8e34 type=CONTAINER_STARTED_EVENT Jan 20 02:31:23.480712 sshd[7079]: Connection closed by 10.0.0.1 port 
44052 Jan 20 02:31:23.480376 sshd-session[7075]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:23.610395 kernel: audit: type=1106 audit(1768876283.509:925): pid=7075 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:23.610603 kernel: audit: type=1104 audit(1768876283.509:926): pid=7075 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:23.509000 audit[7075]: USER_END pid=7075 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:23.509000 audit[7075]: CRED_DISP pid=7075 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:23.610866 kubelet[3053]: E0120 02:31:23.537177 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" 
Jan 20 02:31:23.560711 systemd[1]: sshd@27-10.0.0.97:22-10.0.0.1:44052.service: Deactivated successfully. Jan 20 02:31:23.598343 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 02:31:23.630852 systemd-logind[1619]: Session 29 logged out. Waiting for processes to exit. Jan 20 02:31:23.661857 systemd-logind[1619]: Removed session 29. Jan 20 02:31:23.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.97:22-10.0.0.1:44052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:24.567356 containerd[1640]: time="2026-01-20T02:31:24.567283888Z" level=info msg="container event discarded" container=2589ed48a81a396c408f87301a5024416de46e8a010f2a34768f6e17ac6f4a2b type=CONTAINER_STARTED_EVENT Jan 20 02:31:24.577407 kubelet[3053]: E0120 02:31:24.577130 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:24.760582 containerd[1640]: time="2026-01-20T02:31:24.751299073Z" level=info msg="container event discarded" container=0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f type=CONTAINER_CREATED_EVENT Jan 20 02:31:24.760582 containerd[1640]: time="2026-01-20T02:31:24.751368942Z" level=info msg="container event discarded" container=0ebd9b3030659c71014812b38fe8674a7a015580573c35a0e8c179b2200a893f type=CONTAINER_STARTED_EVENT Jan 20 02:31:25.268715 containerd[1640]: time="2026-01-20T02:31:25.268607640Z" level=info msg="container event discarded" container=d792fd1a4a4b89eb814bd11c0718f6f1a26790818636f27f627b765af1262477 type=CONTAINER_CREATED_EVENT Jan 20 02:31:25.524501 kubelet[3053]: E0120 02:31:25.523399 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:31:27.848093 containerd[1640]: time="2026-01-20T02:31:27.847951796Z" level=info msg="container event discarded" container=d792fd1a4a4b89eb814bd11c0718f6f1a26790818636f27f627b765af1262477 type=CONTAINER_STARTED_EVENT Jan 20 02:31:28.229841 containerd[1640]: time="2026-01-20T02:31:28.229245236Z" level=info msg="container event discarded" container=20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f type=CONTAINER_CREATED_EVENT Jan 20 02:31:28.235893 containerd[1640]: time="2026-01-20T02:31:28.235753882Z" level=info msg="container event discarded" container=20b21f3c40bb176ed470bb2c96919a1a4fb742ba88c95448f44f2b10616a4f0f type=CONTAINER_STARTED_EVENT Jan 20 02:31:28.593482 systemd[1]: Started sshd@28-10.0.0.97:22-10.0.0.1:56908.service - OpenSSH per-connection server daemon (10.0.0.1:56908). Jan 20 02:31:28.662642 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:31:28.662822 kernel: audit: type=1130 audit(1768876288.592:928): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.97:22-10.0.0.1:56908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:31:28.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.97:22-10.0.0.1:56908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:28.765131 containerd[1640]: time="2026-01-20T02:31:28.763630688Z" level=info msg="container event discarded" container=3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd type=CONTAINER_CREATED_EVENT Jan 20 02:31:28.765131 containerd[1640]: time="2026-01-20T02:31:28.763714342Z" level=info msg="container event discarded" container=3b4d94e96645a9ba524a8f7cde3ad92b3ec6f99b9c721e73bc981824c34f48fd type=CONTAINER_STARTED_EVENT Jan 20 02:31:30.096601 kubelet[3053]: E0120 02:31:30.081906 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:31:30.191000 audit[7094]: USER_ACCT pid=7094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:30.226582 
sshd[7094]: Accepted publickey for core from 10.0.0.1 port 56908 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:31:30.271406 sshd-session[7094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:30.245000 audit[7094]: CRED_ACQ pid=7094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:30.350816 systemd-logind[1619]: New session 30 of user core. Jan 20 02:31:30.386106 kernel: audit: type=1101 audit(1768876290.191:929): pid=7094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:30.386279 kernel: audit: type=1103 audit(1768876290.245:930): pid=7094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:30.386347 kernel: audit: type=1006 audit(1768876290.245:931): pid=7094 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 20 02:31:30.428617 kernel: audit: type=1300 audit(1768876290.245:931): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf815b510 a2=3 a3=0 items=0 ppid=1 pid=7094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:30.245000 audit[7094]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf815b510 a2=3 a3=0 items=0 ppid=1 pid=7094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:30.500680 kernel: audit: type=1327 audit(1768876290.245:931): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:30.245000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:30.536733 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 20 02:31:30.580000 audit[7094]: USER_START pid=7094 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:30.663444 kernel: audit: type=1105 audit(1768876290.580:932): pid=7094 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:30.664978 containerd[1640]: time="2026-01-20T02:31:30.664876508Z" level=info msg="container event discarded" container=8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81 type=CONTAINER_CREATED_EVENT Jan 20 02:31:30.664978 containerd[1640]: time="2026-01-20T02:31:30.664943963Z" level=info msg="container event discarded" container=8fc37c908c5a7b4c2b6c73e8b332fd800fac3a2e01583ac7161a924cbf574d81 type=CONTAINER_STARTED_EVENT Jan 20 02:31:30.690000 audit[7103]: CRED_ACQ pid=7103 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:30.756118 kernel: audit: type=1103 
audit(1768876290.690:933): pid=7103 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:31.538932 kubelet[3053]: E0120 02:31:31.537922 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:31:31.722000 audit[7094]: USER_END pid=7094 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:31.732117 sshd[7103]: Connection closed by 10.0.0.1 port 56908 Jan 20 02:31:31.716509 sshd-session[7094]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:31.787751 kernel: audit: type=1106 audit(1768876291.722:934): pid=7094 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:31.784000 audit[7094]: CRED_DISP pid=7094 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:31.836393 kernel: audit: type=1104 audit(1768876291.784:935): pid=7094 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:31.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.97:22-10.0.0.1:56908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:31.803126 systemd[1]: sshd@28-10.0.0.97:22-10.0.0.1:56908.service: Deactivated successfully. Jan 20 02:31:31.859504 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 02:31:31.924349 systemd-logind[1619]: Session 30 logged out. Waiting for processes to exit. Jan 20 02:31:31.957601 systemd-logind[1619]: Removed session 30. Jan 20 02:31:32.518350 kubelet[3053]: E0120 02:31:32.517937 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:31:34.056707 containerd[1640]: time="2026-01-20T02:31:34.054602152Z" level=info msg="container event discarded" container=933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d type=CONTAINER_CREATED_EVENT Jan 20 02:31:34.056707 containerd[1640]: time="2026-01-20T02:31:34.055277408Z" level=info msg="container event discarded" container=933e114262b83c1ee69ae75b76959e7c156aaceb19a910794552dc00f08c1a1d type=CONTAINER_STARTED_EVENT 
Jan 20 02:31:34.534367 kubelet[3053]: E0120 02:31:34.531883 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:31:36.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.97:22-10.0.0.1:39780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:36.796184 systemd[1]: Started sshd@29-10.0.0.97:22-10.0.0.1:39780.service - OpenSSH per-connection server daemon (10.0.0.1:39780). Jan 20 02:31:36.815483 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:31:36.815676 kernel: audit: type=1130 audit(1768876296.794:937): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.97:22-10.0.0.1:39780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:31:37.296000 audit[7117]: USER_ACCT pid=7117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:37.314428 sshd[7117]: Accepted publickey for core from 10.0.0.1 port 39780 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:31:37.330766 sshd-session[7117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:37.397733 kernel: audit: type=1101 audit(1768876297.296:938): pid=7117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:37.413249 kernel: audit: type=1103 audit(1768876297.319:939): pid=7117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:37.319000 audit[7117]: CRED_ACQ pid=7117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:37.425425 systemd-logind[1619]: New session 31 of user core. 
Jan 20 02:31:37.457812 kernel: audit: type=1006 audit(1768876297.319:940): pid=7117 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 20 02:31:37.319000 audit[7117]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0239dcb0 a2=3 a3=0 items=0 ppid=1 pid=7117 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:37.497683 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 20 02:31:37.578750 kernel: audit: type=1300 audit(1768876297.319:940): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0239dcb0 a2=3 a3=0 items=0 ppid=1 pid=7117 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:37.319000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:37.631266 kernel: audit: type=1327 audit(1768876297.319:940): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:37.673000 audit[7117]: USER_START pid=7117 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:37.819254 kernel: audit: type=1105 audit(1768876297.673:941): pid=7117 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:37.819420 
kernel: audit: type=1103 audit(1768876297.748:942): pid=7121 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:37.748000 audit[7121]: CRED_ACQ pid=7121 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:38.627724 kubelet[3053]: E0120 02:31:38.621626 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:31:38.637580 kubelet[3053]: E0120 02:31:38.628802 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:31:39.215817 sshd[7121]: Connection closed by 10.0.0.1 port 39780 Jan 20 02:31:39.220332 sshd-session[7117]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:39.239000 audit[7117]: USER_END pid=7117 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:39.299808 kernel: audit: type=1106 audit(1768876299.239:943): pid=7117 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:39.305711 systemd-logind[1619]: Session 31 logged out. Waiting for processes to exit. Jan 20 02:31:39.244000 audit[7117]: CRED_DISP pid=7117 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:39.331674 systemd[1]: sshd@29-10.0.0.97:22-10.0.0.1:39780.service: Deactivated successfully. Jan 20 02:31:39.355748 systemd[1]: session-31.scope: Deactivated successfully. Jan 20 02:31:39.374372 systemd-logind[1619]: Removed session 31. 
Jan 20 02:31:39.397962 kernel: audit: type=1104 audit(1768876299.244:944): pid=7117 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:39.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.97:22-10.0.0.1:39780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:42.533170 kubelet[3053]: E0120 02:31:42.529440 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:31:42.539227 kubelet[3053]: E0120 02:31:42.537338 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:31:44.390240 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:31:44.390416 kernel: audit: type=1130 audit(1768876304.371:946): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.97:22-10.0.0.1:39792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:44.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.97:22-10.0.0.1:39792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:44.372333 systemd[1]: Started sshd@30-10.0.0.97:22-10.0.0.1:39792.service - OpenSSH per-connection server daemon (10.0.0.1:39792). Jan 20 02:31:45.192589 sshd[7160]: Accepted publickey for core from 10.0.0.1 port 39792 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:31:45.191000 audit[7160]: USER_ACCT pid=7160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:45.238394 sshd-session[7160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:45.285941 kernel: audit: type=1101 audit(1768876305.191:947): pid=7160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:45.215000 audit[7160]: CRED_ACQ pid=7160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:45.383650 kernel: audit: type=1103 audit(1768876305.215:948): pid=7160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:45.215000 audit[7160]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd593728a0 a2=3 a3=0 items=0 ppid=1 pid=7160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:45.454729 systemd-logind[1619]: New session 32 of user core. Jan 20 02:31:45.511913 kernel: audit: type=1006 audit(1768876305.215:949): pid=7160 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 20 02:31:45.512093 kernel: audit: type=1300 audit(1768876305.215:949): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd593728a0 a2=3 a3=0 items=0 ppid=1 pid=7160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:45.512177 kernel: audit: type=1327 audit(1768876305.215:949): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:45.215000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:45.533778 kubelet[3053]: E0120 02:31:45.521498 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:31:45.669675 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 20 02:31:45.779000 audit[7160]: USER_START pid=7160 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:45.836153 kernel: audit: type=1105 audit(1768876305.779:950): pid=7160 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:45.837000 audit[7167]: CRED_ACQ pid=7167 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:45.930178 kernel: audit: type=1103 audit(1768876305.837:951): pid=7167 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:46.761749 sshd[7167]: Connection closed by 10.0.0.1 port 39792 Jan 20 02:31:46.810805 sshd-session[7160]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:46.811000 audit[7160]: USER_END pid=7160 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:46.830177 systemd[1]: sshd@30-10.0.0.97:22-10.0.0.1:39792.service: Deactivated successfully. Jan 20 02:31:46.858441 systemd[1]: session-32.scope: Deactivated successfully. Jan 20 02:31:46.891360 systemd-logind[1619]: Session 32 logged out. Waiting for processes to exit. Jan 20 02:31:46.811000 audit[7160]: CRED_DISP pid=7160 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:46.941921 systemd-logind[1619]: Removed session 32. Jan 20 02:31:46.954881 kernel: audit: type=1106 audit(1768876306.811:952): pid=7160 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:46.954965 kernel: audit: type=1104 audit(1768876306.811:953): pid=7160 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:46.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.97:22-10.0.0.1:39792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:31:47.532155 kubelet[3053]: E0120 02:31:47.520436 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:31:51.525584 kubelet[3053]: E0120 02:31:51.525202 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:51.590898 kubelet[3053]: E0120 02:31:51.590694 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:31:51.899339 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:31:51.899564 kernel: audit: type=1130 audit(1768876311.885:955): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.97:22-10.0.0.1:53708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:51.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.97:22-10.0.0.1:53708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:31:51.885851 systemd[1]: Started sshd@31-10.0.0.97:22-10.0.0.1:53708.service - OpenSSH per-connection server daemon (10.0.0.1:53708). 
Jan 20 02:31:52.540245 kubelet[3053]: E0120 02:31:52.540187 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:31:53.056980 sshd[7185]: Accepted publickey for core from 10.0.0.1 port 53708 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:31:53.054000 audit[7185]: USER_ACCT pid=7185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:53.084047 sshd-session[7185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:31:53.205405 kernel: audit: type=1101 audit(1768876313.054:956): pid=7185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:53.207493 kernel: audit: type=1103 audit(1768876313.071:957): pid=7185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:53.071000 audit[7185]: CRED_ACQ pid=7185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:53.332819 kernel: audit: type=1006 audit(1768876313.071:958): pid=7185 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 20 02:31:53.332981 kernel: audit: type=1300 audit(1768876313.071:958): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7ded2110 a2=3 a3=0 items=0 ppid=1 pid=7185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:53.333027 kernel: audit: type=1327 audit(1768876313.071:958): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:53.071000 audit[7185]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7ded2110 a2=3 a3=0 items=0 ppid=1 pid=7185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:31:53.071000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:31:53.368325 systemd-logind[1619]: New session 33 of user core. Jan 20 02:31:53.391997 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 20 02:31:53.626000 audit[7185]: USER_START pid=7185 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:53.695267 kernel: audit: type=1105 audit(1768876313.626:959): pid=7185 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:53.699000 audit[7191]: CRED_ACQ pid=7191 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:53.789587 kernel: audit: type=1103 audit(1768876313.699:960): pid=7191 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:53.844801 kubelet[3053]: E0120 02:31:53.840802 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:31:54.992440 sshd[7191]: Connection closed by 10.0.0.1 port 53708 Jan 20 02:31:54.995772 sshd-session[7185]: pam_unix(sshd:session): session closed for user core Jan 20 02:31:55.083000 audit[7185]: USER_END pid=7185 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:55.127560 systemd[1]: sshd@31-10.0.0.97:22-10.0.0.1:53708.service: Deactivated successfully. Jan 20 02:31:55.163506 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 02:31:55.177152 systemd-logind[1619]: Session 33 logged out. Waiting for processes to exit. 
Jan 20 02:31:55.215274 kernel: audit: type=1106 audit(1768876315.083:961): pid=7185 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:55.215391 kernel: audit: type=1104 audit(1768876315.083:962): pid=7185 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:55.083000 audit[7185]: CRED_DISP pid=7185 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:31:55.237480 systemd-logind[1619]: Removed session 33. Jan 20 02:31:55.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.97:22-10.0.0.1:53708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:31:56.673741 kubelet[3053]: E0120 02:31:56.666288 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:31:57.523203 kubelet[3053]: E0120 02:31:57.522973 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:31:58.589332 kubelet[3053]: E0120 02:31:58.522851 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:31:59.524978 kubelet[3053]: E0120 02:31:59.524456 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:32:00.117811 systemd[1]: Started sshd@32-10.0.0.97:22-10.0.0.1:58058.service - OpenSSH per-connection server daemon (10.0.0.1:58058). Jan 20 02:32:00.174828 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:32:00.175016 kernel: audit: type=1130 audit(1768876320.119:964): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.97:22-10.0.0.1:58058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:00.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.97:22-10.0.0.1:58058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:32:00.909301 sshd[7213]: Accepted publickey for core from 10.0.0.1 port 58058 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:32:00.904000 audit[7213]: USER_ACCT pid=7213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:00.973161 sshd-session[7213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:01.035405 kernel: audit: type=1101 audit(1768876320.904:965): pid=7213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:01.035651 kernel: audit: type=1103 audit(1768876320.934:966): pid=7213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:00.934000 audit[7213]: CRED_ACQ pid=7213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:01.123676 systemd-logind[1619]: New session 34 of user core. 
Jan 20 02:32:01.182136 kernel: audit: type=1006 audit(1768876320.950:967): pid=7213 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jan 20 02:32:01.182616 kernel: audit: type=1300 audit(1768876320.950:967): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcee4ee2a0 a2=3 a3=0 items=0 ppid=1 pid=7213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:00.950000 audit[7213]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcee4ee2a0 a2=3 a3=0 items=0 ppid=1 pid=7213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:01.255988 kernel: audit: type=1327 audit(1768876320.950:967): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:00.950000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:01.302609 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 20 02:32:01.376000 audit[7213]: USER_START pid=7213 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:01.560273 kernel: audit: type=1105 audit(1768876321.376:968): pid=7213 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:01.560463 kernel: audit: type=1103 audit(1768876321.491:969): pid=7219 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:01.491000 audit[7219]: CRED_ACQ pid=7219 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:03.545587 sshd[7219]: Connection closed by 10.0.0.1 port 58058 Jan 20 02:32:03.539762 sshd-session[7213]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:03.550000 audit[7213]: USER_END pid=7213 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:03.550000 audit[7213]: CRED_DISP pid=7213 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:03.708019 systemd[1]: sshd@32-10.0.0.97:22-10.0.0.1:58058.service: Deactivated successfully. Jan 20 02:32:03.761398 kernel: audit: type=1106 audit(1768876323.550:970): pid=7213 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:03.761622 kernel: audit: type=1104 audit(1768876323.550:971): pid=7213 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:03.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.97:22-10.0.0.1:58058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:03.767916 systemd[1]: session-34.scope: Deactivated successfully. Jan 20 02:32:03.829167 systemd-logind[1619]: Session 34 logged out. Waiting for processes to exit. Jan 20 02:32:03.864993 systemd-logind[1619]: Removed session 34. 
Jan 20 02:32:04.994194 kubelet[3053]: E0120 02:32:04.990440 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:32:06.541765 kubelet[3053]: E0120 02:32:06.536059 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:32:08.670397 kubelet[3053]: E0120 02:32:08.668897 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:08.714297 kubelet[3053]: E0120 02:32:08.710831 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:32:08.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.97:22-10.0.0.1:34394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:08.733405 systemd[1]: Started sshd@33-10.0.0.97:22-10.0.0.1:34394.service - OpenSSH per-connection server daemon (10.0.0.1:34394). Jan 20 02:32:08.823014 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:32:08.823162 kernel: audit: type=1130 audit(1768876328.733:973): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.97:22-10.0.0.1:34394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:32:09.570607 containerd[1640]: time="2026-01-20T02:32:09.566457679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:32:09.701000 audit[7259]: USER_ACCT pid=7259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:09.713970 sshd[7259]: Accepted publickey for core from 10.0.0.1 port 34394 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:32:09.775282 sshd-session[7259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:09.798586 kernel: audit: type=1101 audit(1768876329.701:974): pid=7259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:09.745000 audit[7259]: CRED_ACQ pid=7259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:09.896268 kernel: audit: type=1103 audit(1768876329.745:975): pid=7259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:09.907633 containerd[1640]: time="2026-01-20T02:32:09.905666955Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:32:09.975848 kernel: audit: type=1006 audit(1768876329.745:976): pid=7259 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=35 res=1 Jan 20 02:32:09.976026 kernel: audit: type=1300 audit(1768876329.745:976): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf93c4780 a2=3 a3=0 items=0 ppid=1 pid=7259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:09.745000 audit[7259]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf93c4780 a2=3 a3=0 items=0 ppid=1 pid=7259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:09.993841 containerd[1640]: time="2026-01-20T02:32:09.990031444Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:32:09.993841 containerd[1640]: time="2026-01-20T02:32:09.990236596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:32:09.993277 systemd-logind[1619]: New session 35 of user core. 
Jan 20 02:32:09.999806 kubelet[3053]: E0120 02:32:09.990434 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:32:09.999806 kubelet[3053]: E0120 02:32:09.990497 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:32:09.745000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:10.003648 kubelet[3053]: E0120 02:32:10.000862 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:32:10.022048 kernel: audit: type=1327 audit(1768876329.745:976): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:10.022269 kubelet[3053]: E0120 02:32:10.001001 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" 
podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:32:10.033998 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 20 02:32:10.116000 audit[7259]: USER_START pid=7259 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:10.199000 audit[7263]: CRED_ACQ pid=7263 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:10.327793 kernel: audit: type=1105 audit(1768876330.116:977): pid=7259 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:10.332395 kernel: audit: type=1103 audit(1768876330.199:978): pid=7263 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:10.553200 containerd[1640]: time="2026-01-20T02:32:10.550484593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:32:10.747828 containerd[1640]: time="2026-01-20T02:32:10.747662262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:32:10.758498 containerd[1640]: time="2026-01-20T02:32:10.758422987Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:32:10.758880 containerd[1640]: time="2026-01-20T02:32:10.758741198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:32:10.759207 kubelet[3053]: E0120 02:32:10.759158 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:32:10.759940 kubelet[3053]: E0120 02:32:10.759369 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:32:10.759940 kubelet[3053]: E0120 02:32:10.759483 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-749b857967-xt4pg_calico-system(75bc6f23-38ce-4e96-aaf1-83d653850866): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:32:10.766180 containerd[1640]: time="2026-01-20T02:32:10.764763337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:32:10.898377 containerd[1640]: time="2026-01-20T02:32:10.898311345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:32:10.926489 containerd[1640]: time="2026-01-20T02:32:10.925641397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:32:10.926489 containerd[1640]: time="2026-01-20T02:32:10.925785515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:32:10.926769 kubelet[3053]: E0120 02:32:10.925953 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:32:10.926769 kubelet[3053]: E0120 02:32:10.926011 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:32:10.926769 kubelet[3053]: E0120 02:32:10.926143 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-749b857967-xt4pg_calico-system(75bc6f23-38ce-4e96-aaf1-83d653850866): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:32:10.926769 kubelet[3053]: E0120 02:32:10.926201 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:32:11.576971 containerd[1640]: time="2026-01-20T02:32:11.572629240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:32:11.946834 sshd[7263]: Connection closed by 10.0.0.1 port 34394 Jan 20 02:32:11.957852 sshd-session[7259]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:12.003266 containerd[1640]: time="2026-01-20T02:32:12.001976672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:32:12.173801 kernel: audit: type=1106 audit(1768876332.014:979): pid=7259 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:12.014000 audit[7259]: USER_END pid=7259 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:12.091157 systemd[1]: sshd@33-10.0.0.97:22-10.0.0.1:34394.service: Deactivated successfully. 
Jan 20 02:32:12.176487 kubelet[3053]: E0120 02:32:12.063718 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:32:12.176487 kubelet[3053]: E0120 02:32:12.063797 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:32:12.176487 kubelet[3053]: E0120 02:32:12.063899 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:32:12.176487 kubelet[3053]: E0120 02:32:12.063963 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:32:12.189407 containerd[1640]: time="2026-01-20T02:32:12.040422960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:32:12.189407 containerd[1640]: time="2026-01-20T02:32:12.040722246Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:32:12.014000 audit[7259]: CRED_DISP pid=7259 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:12.237705 systemd[1]: session-35.scope: Deactivated successfully. Jan 20 02:32:12.276011 kernel: audit: type=1104 audit(1768876332.014:980): pid=7259 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:12.288160 systemd-logind[1619]: Session 35 logged out. Waiting for processes to exit. Jan 20 02:32:12.297621 systemd-logind[1619]: Removed session 35. Jan 20 02:32:12.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.97:22-10.0.0.1:34394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:12.537348 kubelet[3053]: E0120 02:32:12.532122 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:17.023891 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:32:17.024135 kernel: audit: type=1130 audit(1768876337.010:982): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.97:22-10.0.0.1:48556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:32:17.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.97:22-10.0.0.1:48556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:17.014618 systemd[1]: Started sshd@34-10.0.0.97:22-10.0.0.1:48556.service - OpenSSH per-connection server daemon (10.0.0.1:48556). Jan 20 02:32:17.740210 sshd[7281]: Accepted publickey for core from 10.0.0.1 port 48556 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:32:17.730000 audit[7281]: USER_ACCT pid=7281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:17.778438 sshd-session[7281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:17.744000 audit[7281]: CRED_ACQ pid=7281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:17.872654 kernel: audit: type=1101 audit(1768876337.730:983): pid=7281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:17.872822 kernel: audit: type=1103 audit(1768876337.744:984): pid=7281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:17.872863 kernel: audit: type=1006 audit(1768876337.744:985): 
pid=7281 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1 Jan 20 02:32:17.744000 audit[7281]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcda3114a0 a2=3 a3=0 items=0 ppid=1 pid=7281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:17.921306 systemd-logind[1619]: New session 36 of user core. Jan 20 02:32:17.744000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:17.993990 kernel: audit: type=1300 audit(1768876337.744:985): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcda3114a0 a2=3 a3=0 items=0 ppid=1 pid=7281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:17.994190 kernel: audit: type=1327 audit(1768876337.744:985): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:18.023805 systemd[1]: Started session-36.scope - Session 36 of User core. 
Jan 20 02:32:18.081000 audit[7281]: USER_START pid=7281 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:18.179666 kernel: audit: type=1105 audit(1768876338.081:986): pid=7281 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:18.179829 kernel: audit: type=1103 audit(1768876338.109:987): pid=7285 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:18.109000 audit[7285]: CRED_ACQ pid=7285 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:19.424576 sshd[7285]: Connection closed by 10.0.0.1 port 48556 Jan 20 02:32:19.427728 sshd-session[7281]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:19.439000 audit[7281]: USER_END pid=7281 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:19.541379 kernel: audit: type=1106 audit(1768876339.439:988): pid=7281 uid=0 auid=500 ses=36 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:19.560680 containerd[1640]: time="2026-01-20T02:32:19.552061024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:32:19.619383 kernel: audit: type=1104 audit(1768876339.439:989): pid=7281 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:19.439000 audit[7281]: CRED_DISP pid=7281 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:19.596006 systemd[1]: sshd@34-10.0.0.97:22-10.0.0.1:48556.service: Deactivated successfully. Jan 20 02:32:19.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.97:22-10.0.0.1:48556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:32:19.630495 kubelet[3053]: E0120 02:32:19.568717 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:32:19.641929 systemd[1]: session-36.scope: Deactivated successfully. Jan 20 02:32:19.689157 systemd-logind[1619]: Session 36 logged out. Waiting for processes to exit. Jan 20 02:32:19.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.97:22-10.0.0.1:48578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:19.739477 systemd[1]: Started sshd@35-10.0.0.97:22-10.0.0.1:48578.service - OpenSSH per-connection server daemon (10.0.0.1:48578). Jan 20 02:32:19.793247 systemd-logind[1619]: Removed session 36. 
Jan 20 02:32:19.951219 containerd[1640]: time="2026-01-20T02:32:19.938595419Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:32:19.951219 containerd[1640]: time="2026-01-20T02:32:19.950904830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:32:19.951477 containerd[1640]: time="2026-01-20T02:32:19.951213435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:32:19.951589 kubelet[3053]: E0120 02:32:19.951486 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:32:19.951663 kubelet[3053]: E0120 02:32:19.951593 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:32:19.951709 kubelet[3053]: E0120 02:32:19.951678 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:32:19.951746 kubelet[3053]: E0120 02:32:19.951723 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:32:20.456000 audit[7300]: USER_ACCT pid=7300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:20.457707 sshd[7300]: Accepted publickey for core from 10.0.0.1 port 48578 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:32:20.464000 audit[7300]: CRED_ACQ pid=7300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:20.464000 audit[7300]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc192ddfd0 a2=3 a3=0 items=0 ppid=1 pid=7300 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:20.464000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:20.476997 sshd-session[7300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:20.546498 systemd-logind[1619]: New session 37 of user core. 
Jan 20 02:32:20.571981 kubelet[3053]: E0120 02:32:20.571744 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:20.598188 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 20 02:32:20.654000 audit[7300]: USER_START pid=7300 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:20.667000 audit[7304]: CRED_ACQ pid=7304 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:22.541681 kubelet[3053]: E0120 02:32:22.523784 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:32:22.577138 kubelet[3053]: E0120 02:32:22.576384 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:32:23.033369 sshd[7304]: Connection closed by 10.0.0.1 port 48578 Jan 20 02:32:23.030507 sshd-session[7300]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:23.114570 kernel: kauditd_printk_skb: 9 callbacks suppressed Jan 20 02:32:23.114929 kernel: audit: type=1106 audit(1768876343.080:997): pid=7300 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:23.080000 audit[7300]: USER_END pid=7300 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:23.112767 systemd[1]: sshd@35-10.0.0.97:22-10.0.0.1:48578.service: Deactivated successfully. Jan 20 02:32:23.145903 systemd[1]: session-37.scope: Deactivated successfully. 
Jan 20 02:32:23.221234 kernel: audit: type=1104 audit(1768876343.080:998): pid=7300 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:23.080000 audit[7300]: CRED_DISP pid=7300 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:23.187008 systemd-logind[1619]: Session 37 logged out. Waiting for processes to exit. Jan 20 02:32:23.231479 systemd[1]: Started sshd@36-10.0.0.97:22-10.0.0.1:48584.service - OpenSSH per-connection server daemon (10.0.0.1:48584). Jan 20 02:32:23.246310 kernel: audit: type=1131 audit(1768876343.127:999): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.97:22-10.0.0.1:48578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:23.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.97:22-10.0.0.1:48578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:23.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.97:22-10.0.0.1:48584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:23.299445 kernel: audit: type=1130 audit(1768876343.279:1000): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.97:22-10.0.0.1:48584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:23.378159 systemd-logind[1619]: Removed session 37. 
Jan 20 02:32:24.272000 audit[7317]: USER_ACCT pid=7317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:24.290454 sshd[7317]: Accepted publickey for core from 10.0.0.1 port 48584 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:32:24.325263 sshd-session[7317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:24.359227 kernel: audit: type=1101 audit(1768876344.272:1001): pid=7317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:24.298000 audit[7317]: CRED_ACQ pid=7317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:24.432597 kernel: audit: type=1103 audit(1768876344.298:1002): pid=7317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:24.430941 systemd-logind[1619]: New session 38 of user core. 
Jan 20 02:32:24.444286 kernel: audit: type=1006 audit(1768876344.298:1003): pid=7317 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=38 res=1 Jan 20 02:32:24.298000 audit[7317]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec77ab4a0 a2=3 a3=0 items=0 ppid=1 pid=7317 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:24.473738 kernel: audit: type=1300 audit(1768876344.298:1003): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec77ab4a0 a2=3 a3=0 items=0 ppid=1 pid=7317 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:24.298000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:24.483145 kernel: audit: type=1327 audit(1768876344.298:1003): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:24.484950 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 20 02:32:24.504000 audit[7317]: USER_START pid=7317 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:24.601438 containerd[1640]: time="2026-01-20T02:32:24.601253502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:32:24.648964 kernel: audit: type=1105 audit(1768876344.504:1004): pid=7317 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:24.517000 audit[7321]: CRED_ACQ pid=7321 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:24.855994 containerd[1640]: time="2026-01-20T02:32:24.855827276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:32:24.890907 containerd[1640]: time="2026-01-20T02:32:24.879502826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:32:24.890907 containerd[1640]: time="2026-01-20T02:32:24.890629029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:32:24.891629 kubelet[3053]: E0120 02:32:24.891582 3053 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:32:24.912323 kubelet[3053]: E0120 02:32:24.905385 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:32:24.912663 kubelet[3053]: E0120 02:32:24.912609 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:32:24.919604 kubelet[3053]: E0120 02:32:24.919423 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:32:25.522381 kubelet[3053]: E0120 02:32:25.519010 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:32:26.118594 sshd[7321]: Connection closed by 10.0.0.1 port 48584 Jan 20 02:32:26.134349 sshd-session[7317]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:26.227000 audit[7317]: USER_END pid=7317 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:26.227000 audit[7317]: CRED_DISP pid=7317 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:26.369438 systemd[1]: sshd@36-10.0.0.97:22-10.0.0.1:48584.service: Deactivated successfully. Jan 20 02:32:26.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.97:22-10.0.0.1:48584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:26.439393 systemd[1]: session-38.scope: Deactivated successfully. Jan 20 02:32:26.493229 systemd-logind[1619]: Session 38 logged out. Waiting for processes to exit. Jan 20 02:32:26.527601 systemd-logind[1619]: Removed session 38. Jan 20 02:32:31.321911 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 20 02:32:31.323199 kernel: audit: type=1130 audit(1768876351.276:1009): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.97:22-10.0.0.1:41608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:32:31.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.97:22-10.0.0.1:41608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:31.278741 systemd[1]: Started sshd@37-10.0.0.97:22-10.0.0.1:41608.service - OpenSSH per-connection server daemon (10.0.0.1:41608). Jan 20 02:32:31.526306 kubelet[3053]: E0120 02:32:31.522297 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:32:32.117000 audit[7338]: USER_ACCT pid=7338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:32.129739 sshd[7338]: Accepted publickey for core from 10.0.0.1 port 41608 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:32:32.144842 sshd-session[7338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:32.187610 kernel: audit: type=1101 audit(1768876352.117:1010): pid=7338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:32.137000 audit[7338]: CRED_ACQ pid=7338 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:32.285312 kernel: audit: type=1103 audit(1768876352.137:1011): pid=7338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:32.137000 audit[7338]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebefb03a0 a2=3 a3=0 items=0 ppid=1 pid=7338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:32.369308 systemd-logind[1619]: New session 39 of user core. Jan 20 02:32:32.408660 kernel: audit: type=1006 audit(1768876352.137:1012): pid=7338 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=39 res=1 Jan 20 02:32:32.408856 kernel: audit: type=1300 audit(1768876352.137:1012): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebefb03a0 a2=3 a3=0 items=0 ppid=1 pid=7338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:32.408915 kernel: audit: type=1327 audit(1768876352.137:1012): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:32.137000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:32.476001 systemd[1]: Started session-39.scope - Session 39 of User core. 
Jan 20 02:32:32.595000 audit[7338]: USER_START pid=7338 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:32.647000 audit[7342]: CRED_ACQ pid=7342 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:32.773786 kernel: audit: type=1105 audit(1768876352.595:1013): pid=7338 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:32.773908 kernel: audit: type=1103 audit(1768876352.647:1014): pid=7342 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:33.602251 kubelet[3053]: E0120 02:32:33.574048 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:33.661573 containerd[1640]: time="2026-01-20T02:32:33.661335248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:32:33.867622 containerd[1640]: time="2026-01-20T02:32:33.865654213Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:32:33.902343 containerd[1640]: time="2026-01-20T02:32:33.901377706Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:32:33.902343 containerd[1640]: time="2026-01-20T02:32:33.901596031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:32:33.909699 kubelet[3053]: E0120 02:32:33.909449 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:32:33.909975 kubelet[3053]: E0120 02:32:33.909825 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:32:33.912977 kubelet[3053]: E0120 02:32:33.912941 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:32:33.992428 containerd[1640]: time="2026-01-20T02:32:33.992243319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:32:34.279065 containerd[1640]: time="2026-01-20T02:32:34.278722461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:32:34.319433 containerd[1640]: time="2026-01-20T02:32:34.319339656Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:32:34.330820 containerd[1640]: time="2026-01-20T02:32:34.319435574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:32:34.331048 sshd[7342]: Connection closed by 10.0.0.1 port 41608 Jan 20 02:32:34.331588 kubelet[3053]: E0120 02:32:34.321911 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:32:34.331588 kubelet[3053]: E0120 02:32:34.321969 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:32:34.331588 kubelet[3053]: E0120 02:32:34.322059 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:32:34.348937 kubelet[3053]: E0120 02:32:34.331949 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:32:34.340837 sshd-session[7338]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:34.375000 audit[7338]: USER_END pid=7338 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:34.411942 systemd-logind[1619]: Session 39 logged out. Waiting for processes to exit. Jan 20 02:32:34.423033 systemd[1]: sshd@37-10.0.0.97:22-10.0.0.1:41608.service: Deactivated successfully. Jan 20 02:32:34.430426 systemd[1]: session-39.scope: Deactivated successfully. Jan 20 02:32:34.435903 systemd-logind[1619]: Removed session 39. 
Jan 20 02:32:34.478673 kernel: audit: type=1106 audit(1768876354.375:1015): pid=7338 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:34.375000 audit[7338]: CRED_DISP pid=7338 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:34.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.97:22-10.0.0.1:41608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:34.510667 kernel: audit: type=1104 audit(1768876354.375:1016): pid=7338 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:34.541356 kubelet[3053]: E0120 02:32:34.540691 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:32:36.666216 kubelet[3053]: E0120 02:32:36.640984 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:32:37.567947 kubelet[3053]: E0120 02:32:37.567129 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:32:38.601340 kubelet[3053]: E0120 02:32:38.600044 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:39.499947 systemd[1]: Started sshd@38-10.0.0.97:22-10.0.0.1:58514.service - OpenSSH per-connection server daemon (10.0.0.1:58514). 
Jan 20 02:32:39.524678 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:32:39.524843 kernel: audit: type=1130 audit(1768876359.499:1018): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.97:22-10.0.0.1:58514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:39.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.97:22-10.0.0.1:58514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:42.688405 kubelet[3053]: E0120 02:32:42.685282 3053 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.888s" Jan 20 02:32:42.728316 kubelet[3053]: E0120 02:32:42.694892 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:32:43.623000 audit[7377]: USER_ACCT pid=7377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:43.679037 sshd[7377]: Accepted publickey for core from 10.0.0.1 port 58514 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:32:43.692276 kernel: audit: type=1101 audit(1768876363.623:1019): pid=7377 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:43.689000 audit[7377]: CRED_ACQ pid=7377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:43.744009 sshd-session[7377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:43.812276 kernel: audit: type=1103 audit(1768876363.689:1020): pid=7377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:43.812415 kernel: audit: type=1006 audit(1768876363.694:1021): pid=7377 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=40 res=1 Jan 20 02:32:43.812475 kernel: audit: type=1300 audit(1768876363.694:1021): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf07904a0 a2=3 a3=0 items=0 ppid=1 pid=7377 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:43.694000 audit[7377]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf07904a0 a2=3 a3=0 items=0 ppid=1 pid=7377 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:43.694000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:43.902312 kernel: audit: type=1327 audit(1768876363.694:1021): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:43.896775 systemd-logind[1619]: New session 40 of user core. Jan 20 02:32:43.959480 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 20 02:32:43.988000 audit[7377]: USER_START pid=7377 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:44.021636 kernel: audit: type=1105 audit(1768876363.988:1022): pid=7377 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:44.015000 audit[7389]: CRED_ACQ pid=7389 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:44.069847 kernel: audit: type=1103 audit(1768876364.015:1023): pid=7389 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:44.559217 kubelet[3053]: E0120 02:32:44.540792 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:32:44.919737 sshd[7389]: Connection closed by 10.0.0.1 port 58514 Jan 20 02:32:44.927075 sshd-session[7377]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:44.938000 audit[7377]: USER_END pid=7377 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:44.963181 systemd-logind[1619]: Session 40 logged out. Waiting for processes to exit. Jan 20 02:32:44.978712 kernel: audit: type=1106 audit(1768876364.938:1024): pid=7377 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:44.938000 audit[7377]: CRED_DISP pid=7377 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:44.980094 systemd[1]: sshd@38-10.0.0.97:22-10.0.0.1:58514.service: Deactivated successfully. Jan 20 02:32:45.006919 systemd[1]: session-40.scope: Deactivated successfully. 
Jan 20 02:32:45.018670 kernel: audit: type=1104 audit(1768876364.938:1025): pid=7377 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:45.018804 kernel: audit: type=1131 audit(1768876364.978:1026): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.97:22-10.0.0.1:58514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:44.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.97:22-10.0.0.1:58514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:45.048500 systemd-logind[1619]: Removed session 40. Jan 20 02:32:47.528390 kubelet[3053]: E0120 02:32:47.524090 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:32:47.558831 kubelet[3053]: E0120 02:32:47.554987 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:32:48.533786 kubelet[3053]: E0120 02:32:48.533621 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:32:48.627276 kubelet[3053]: E0120 02:32:48.626002 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" 
podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:32:50.031912 systemd[1]: Started sshd@39-10.0.0.97:22-10.0.0.1:37750.service - OpenSSH per-connection server daemon (10.0.0.1:37750). Jan 20 02:32:50.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.97:22-10.0.0.1:37750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:50.094153 kernel: audit: type=1130 audit(1768876370.035:1027): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.97:22-10.0.0.1:37750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:50.597000 audit[7415]: USER_ACCT pid=7415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:50.614498 sshd[7415]: Accepted publickey for core from 10.0.0.1 port 37750 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:32:50.621224 sshd-session[7415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:50.678694 kernel: audit: type=1101 audit(1768876370.597:1028): pid=7415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:50.678835 kernel: audit: type=1103 audit(1768876370.600:1029): pid=7415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:50.600000 audit[7415]: 
CRED_ACQ pid=7415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:50.900034 kernel: audit: type=1006 audit(1768876370.606:1030): pid=7415 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=41 res=1 Jan 20 02:32:50.606000 audit[7415]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc49d501b0 a2=3 a3=0 items=0 ppid=1 pid=7415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:51.136585 kernel: audit: type=1300 audit(1768876370.606:1030): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc49d501b0 a2=3 a3=0 items=0 ppid=1 pid=7415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:51.136839 kernel: audit: type=1327 audit(1768876370.606:1030): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:50.606000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:51.314876 systemd-logind[1619]: New session 41 of user core. Jan 20 02:32:51.372820 systemd[1]: Started session-41.scope - Session 41 of User core. 
Jan 20 02:32:51.647988 kernel: audit: type=1105 audit(1768876371.422:1031): pid=7415 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:52.489913 kernel: audit: type=1103 audit(1768876372.419:1032): pid=7421 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:51.422000 audit[7415]: USER_START pid=7415 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:52.419000 audit[7421]: CRED_ACQ pid=7421 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:52.632090 kubelet[3053]: E0120 02:32:52.593590 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:32:53.607881 sshd[7421]: Connection closed by 10.0.0.1 port 37750 Jan 20 02:32:53.615912 sshd-session[7415]: pam_unix(sshd:session): session closed for user core Jan 20 02:32:53.631000 audit[7415]: USER_END pid=7415 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:53.694761 systemd[1]: sshd@39-10.0.0.97:22-10.0.0.1:37750.service: Deactivated successfully. Jan 20 02:32:53.731784 kernel: audit: type=1106 audit(1768876373.631:1033): pid=7415 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:53.731980 kernel: audit: type=1104 audit(1768876373.631:1034): pid=7415 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:53.631000 audit[7415]: CRED_DISP pid=7415 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:53.739786 systemd[1]: session-41.scope: Deactivated successfully. Jan 20 02:32:53.779588 systemd-logind[1619]: Session 41 logged out. Waiting for processes to exit. Jan 20 02:32:53.782046 systemd-logind[1619]: Removed session 41. Jan 20 02:32:53.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.97:22-10.0.0.1:37750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:32:57.522930 kubelet[3053]: E0120 02:32:57.521285 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:32:58.519579 kubelet[3053]: E0120 02:32:58.519395 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:32:58.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.97:22-10.0.0.1:45146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:32:58.740498 systemd[1]: Started sshd@40-10.0.0.97:22-10.0.0.1:45146.service - OpenSSH per-connection server daemon (10.0.0.1:45146). Jan 20 02:32:58.772668 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:32:58.772949 kernel: audit: type=1130 audit(1768876378.739:1036): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.97:22-10.0.0.1:45146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:32:59.525270 kernel: audit: type=1101 audit(1768876379.430:1037): pid=7434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:59.430000 audit[7434]: USER_ACCT pid=7434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:59.467010 sshd-session[7434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:32:59.526361 sshd[7434]: Accepted publickey for core from 10.0.0.1 port 45146 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:32:59.542028 kubelet[3053]: E0120 02:32:59.535054 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:32:59.456000 audit[7434]: CRED_ACQ pid=7434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:59.609943 kernel: audit: type=1103 audit(1768876379.456:1038): pid=7434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:59.610114 kernel: audit: type=1006 audit(1768876379.456:1039): pid=7434 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=42 res=1 Jan 20 02:32:59.456000 audit[7434]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc207f22a0 a2=3 a3=0 items=0 ppid=1 pid=7434 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:59.667961 systemd-logind[1619]: New session 42 of user core. Jan 20 02:32:59.695866 kernel: audit: type=1300 audit(1768876379.456:1039): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc207f22a0 a2=3 a3=0 items=0 ppid=1 pid=7434 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:32:59.695990 kernel: audit: type=1327 audit(1768876379.456:1039): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:59.456000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:32:59.725465 systemd[1]: Started session-42.scope - Session 42 of User core. 
Jan 20 02:32:59.778000 audit[7434]: USER_START pid=7434 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:59.875612 kernel: audit: type=1105 audit(1768876379.778:1040): pid=7434 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:59.908000 audit[7438]: CRED_ACQ pid=7438 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:32:59.983611 kernel: audit: type=1103 audit(1768876379.908:1041): pid=7438 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:00.571397 kubelet[3053]: E0120 02:33:00.568925 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:33:00.900902 sshd[7438]: Connection closed by 10.0.0.1 port 45146 Jan 20 02:33:00.904512 sshd-session[7434]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:00.941586 kernel: audit: type=1106 audit(1768876380.909:1042): pid=7434 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:00.909000 audit[7434]: USER_END pid=7434 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:00.929481 systemd[1]: sshd@40-10.0.0.97:22-10.0.0.1:45146.service: Deactivated successfully. Jan 20 02:33:00.933861 systemd-logind[1619]: Session 42 logged out. Waiting for processes to exit. Jan 20 02:33:00.953687 systemd[1]: session-42.scope: Deactivated successfully. 
Jan 20 02:33:00.983617 kernel: audit: type=1104 audit(1768876380.911:1043): pid=7434 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:00.911000 audit[7434]: CRED_DISP pid=7434 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:00.973715 systemd-logind[1619]: Removed session 42. Jan 20 02:33:00.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.97:22-10.0.0.1:45146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:01.608598 kubelet[3053]: E0120 02:33:01.606973 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:01.623709 kubelet[3053]: E0120 02:33:01.615408 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:33:02.602033 kubelet[3053]: E0120 02:33:02.598471 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:33:06.057733 systemd[1]: Started sshd@41-10.0.0.97:22-10.0.0.1:55470.service - OpenSSH per-connection server daemon (10.0.0.1:55470). Jan 20 02:33:06.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.97:22-10.0.0.1:55470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:06.078694 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:33:06.078794 kernel: audit: type=1130 audit(1768876386.068:1045): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.97:22-10.0.0.1:55470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:33:06.344000 audit[7454]: USER_ACCT pid=7454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:06.358354 sshd-session[7454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:06.370488 sshd[7454]: Accepted publickey for core from 10.0.0.1 port 55470 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:33:06.344000 audit[7454]: CRED_ACQ pid=7454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:06.389465 systemd-logind[1619]: New session 43 of user core. Jan 20 02:33:06.422303 kernel: audit: type=1101 audit(1768876386.344:1046): pid=7454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:06.422463 kernel: audit: type=1103 audit(1768876386.344:1047): pid=7454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:06.422503 kernel: audit: type=1006 audit(1768876386.344:1048): pid=7454 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=43 res=1 Jan 20 02:33:06.344000 audit[7454]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff64a55f10 a2=3 a3=0 items=0 ppid=1 pid=7454 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:06.500463 kernel: audit: type=1300 audit(1768876386.344:1048): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff64a55f10 a2=3 a3=0 items=0 ppid=1 pid=7454 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:06.502898 kernel: audit: type=1327 audit(1768876386.344:1048): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:06.344000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:06.503775 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 20 02:33:06.534000 audit[7454]: USER_START pid=7454 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:06.588000 audit[7458]: CRED_ACQ pid=7458 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:06.645041 kernel: audit: type=1105 audit(1768876386.534:1049): pid=7454 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:06.645505 kernel: audit: type=1103 audit(1768876386.588:1050): pid=7458 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:07.296137 sshd[7458]: Connection closed by 10.0.0.1 port 55470 Jan 20 02:33:07.297590 sshd-session[7454]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:07.313000 audit[7454]: USER_END pid=7454 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:07.370702 kernel: audit: type=1106 audit(1768876387.313:1051): pid=7454 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:07.355839 systemd[1]: sshd@41-10.0.0.97:22-10.0.0.1:55470.service: Deactivated successfully. Jan 20 02:33:07.313000 audit[7454]: CRED_DISP pid=7454 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:07.401424 systemd[1]: session-43.scope: Deactivated successfully. 
Jan 20 02:33:07.435380 kernel: audit: type=1104 audit(1768876387.313:1052): pid=7454 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:07.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.97:22-10.0.0.1:55470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:07.448110 systemd-logind[1619]: Session 43 logged out. Waiting for processes to exit. Jan 20 02:33:07.453917 systemd-logind[1619]: Removed session 43. Jan 20 02:33:10.581367 kubelet[3053]: E0120 02:33:10.569995 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:33:12.338620 systemd[1]: Started sshd@42-10.0.0.97:22-10.0.0.1:55520.service - OpenSSH per-connection server daemon (10.0.0.1:55520). Jan 20 02:33:12.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.97:22-10.0.0.1:55520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:33:12.346985 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:33:12.347136 kernel: audit: type=1130 audit(1768876392.337:1054): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.97:22-10.0.0.1:55520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:13.084557 sshd[7496]: Accepted publickey for core from 10.0.0.1 port 55520 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:33:13.079000 audit[7496]: USER_ACCT pid=7496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:13.104044 sshd-session[7496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:13.184400 kernel: audit: type=1101 audit(1768876393.079:1055): pid=7496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:13.184675 kernel: audit: type=1103 audit(1768876393.095:1056): pid=7496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:13.095000 audit[7496]: CRED_ACQ pid=7496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:13.226820 systemd-logind[1619]: New session 44 of user core. 
Jan 20 02:33:13.096000 audit[7496]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedf2d8e30 a2=3 a3=0 items=0 ppid=1 pid=7496 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:13.350371 kernel: audit: type=1006 audit(1768876393.096:1057): pid=7496 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=44 res=1 Jan 20 02:33:13.350609 kernel: audit: type=1300 audit(1768876393.096:1057): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedf2d8e30 a2=3 a3=0 items=0 ppid=1 pid=7496 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:13.350710 kernel: audit: type=1327 audit(1768876393.096:1057): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:13.096000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:13.387609 systemd[1]: Started session-44.scope - Session 44 of User core. 
Jan 20 02:33:13.429000 audit[7496]: USER_START pid=7496 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:13.474186 kernel: audit: type=1105 audit(1768876393.429:1058): pid=7496 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:13.443000 audit[7500]: CRED_ACQ pid=7500 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:13.520212 kernel: audit: type=1103 audit(1768876393.443:1059): pid=7500 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:13.565376 kubelet[3053]: E0120 02:33:13.565325 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:33:13.573933 kubelet[3053]: E0120 02:33:13.571943 3053 
pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:33:13.573933 kubelet[3053]: E0120 02:33:13.572327 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:33:14.166586 sshd[7500]: Connection closed by 10.0.0.1 port 55520 Jan 20 02:33:14.177912 sshd-session[7496]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:14.189000 audit[7496]: USER_END pid=7496 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 
20 02:33:14.189000 audit[7496]: CRED_DISP pid=7496 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:14.298924 systemd[1]: sshd@42-10.0.0.97:22-10.0.0.1:55520.service: Deactivated successfully. Jan 20 02:33:14.304507 kernel: audit: type=1106 audit(1768876394.189:1060): pid=7496 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:14.308106 kernel: audit: type=1104 audit(1768876394.189:1061): pid=7496 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:14.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.97:22-10.0.0.1:55520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:14.307378 systemd[1]: session-44.scope: Deactivated successfully. Jan 20 02:33:14.313356 systemd-logind[1619]: Session 44 logged out. Waiting for processes to exit. Jan 20 02:33:14.317830 systemd-logind[1619]: Removed session 44. 
Jan 20 02:33:15.684009 kubelet[3053]: E0120 02:33:15.681775 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:33:15.684009 kubelet[3053]: E0120 02:33:15.682355 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:33:18.517582 kubelet[3053]: E0120 02:33:18.517337 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:19.356476 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:33:19.356687 kernel: audit: type=1130 audit(1768876399.335:1063): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.97:22-10.0.0.1:35770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:19.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.97:22-10.0.0.1:35770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:19.337807 systemd[1]: Started sshd@43-10.0.0.97:22-10.0.0.1:35770.service - OpenSSH per-connection server daemon (10.0.0.1:35770). Jan 20 02:33:19.933000 audit[7513]: USER_ACCT pid=7513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:19.966924 sshd[7513]: Accepted publickey for core from 10.0.0.1 port 35770 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:33:19.972896 sshd-session[7513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:19.995388 kernel: audit: type=1101 audit(1768876399.933:1064): pid=7513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:19.995592 kernel: audit: type=1103 audit(1768876399.946:1065): pid=7513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:19.946000 audit[7513]: CRED_ACQ pid=7513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:20.039998 systemd-logind[1619]: New session 45 of user core. Jan 20 02:33:20.055628 kernel: audit: type=1006 audit(1768876399.946:1066): pid=7513 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=45 res=1 Jan 20 02:33:19.946000 audit[7513]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc3727870 a2=3 a3=0 items=0 ppid=1 pid=7513 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:20.109720 kernel: audit: type=1300 audit(1768876399.946:1066): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc3727870 a2=3 a3=0 items=0 ppid=1 pid=7513 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:20.109914 kernel: audit: type=1327 audit(1768876399.946:1066): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:19.946000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:20.110313 systemd[1]: Started session-45.scope - Session 45 of User core. 
Jan 20 02:33:20.161000 audit[7513]: USER_START pid=7513 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:20.289476 kernel: audit: type=1105 audit(1768876400.161:1067): pid=7513 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:20.289689 kernel: audit: type=1103 audit(1768876400.183:1068): pid=7517 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:20.183000 audit[7517]: CRED_ACQ pid=7517 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:20.522906 kubelet[3053]: E0120 02:33:20.521450 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:21.105609 sshd[7517]: Connection closed by 10.0.0.1 port 35770 Jan 20 02:33:21.108057 sshd-session[7513]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:21.121000 audit[7513]: USER_END pid=7513 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:21.149809 systemd[1]: sshd@43-10.0.0.97:22-10.0.0.1:35770.service: Deactivated successfully. Jan 20 02:33:21.178105 kernel: audit: type=1106 audit(1768876401.121:1069): pid=7513 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:21.180099 kernel: audit: type=1104 audit(1768876401.121:1070): pid=7513 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:21.121000 audit[7513]: CRED_DISP pid=7513 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:21.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.97:22-10.0.0.1:35770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:21.212231 systemd[1]: session-45.scope: Deactivated successfully. Jan 20 02:33:21.240662 systemd-logind[1619]: Session 45 logged out. Waiting for processes to exit. Jan 20 02:33:21.261638 systemd-logind[1619]: Removed session 45. 
Jan 20 02:33:21.516006 kubelet[3053]: E0120 02:33:21.515470 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:33:25.522986 kubelet[3053]: E0120 02:33:25.521877 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:33:26.223075 systemd[1]: Started sshd@44-10.0.0.97:22-10.0.0.1:38560.service - OpenSSH per-connection server daemon (10.0.0.1:38560). Jan 20 02:33:26.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.97:22-10.0.0.1:38560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:26.298775 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:33:26.300416 kernel: audit: type=1130 audit(1768876406.216:1072): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.97:22-10.0.0.1:38560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:33:26.579726 kubelet[3053]: E0120 02:33:26.573901 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:33:26.862000 audit[7531]: USER_ACCT pid=7531 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:26.908286 kernel: audit: type=1101 audit(1768876406.862:1073): pid=7531 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:26.913436 sshd[7531]: Accepted publickey for core from 10.0.0.1 port 38560 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:33:26.918000 audit[7531]: CRED_ACQ pid=7531 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:26.925856 sshd-session[7531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:26.997809 kernel: audit: type=1103 audit(1768876406.918:1074): pid=7531 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:26.997976 kernel: audit: type=1006 audit(1768876406.919:1075): pid=7531 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=46 res=1 Jan 20 02:33:27.005923 kernel: audit: type=1300 audit(1768876406.919:1075): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf82cc030 a2=3 a3=0 items=0 ppid=1 pid=7531 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:26.919000 audit[7531]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf82cc030 a2=3 a3=0 items=0 ppid=1 pid=7531 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:27.000421 systemd-logind[1619]: New session 46 of user core. Jan 20 02:33:27.077010 kernel: audit: type=1327 audit(1768876406.919:1075): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:26.919000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:27.108283 systemd[1]: Started session-46.scope - Session 46 of User core. 
Jan 20 02:33:27.152000 audit[7531]: USER_START pid=7531 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:27.255157 kernel: audit: type=1105 audit(1768876407.152:1076): pid=7531 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:27.263995 kernel: audit: type=1103 audit(1768876407.169:1077): pid=7535 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:27.169000 audit[7535]: CRED_ACQ pid=7535 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:28.180856 sshd[7535]: Connection closed by 10.0.0.1 port 38560 Jan 20 02:33:28.183966 sshd-session[7531]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:28.203000 audit[7531]: USER_END pid=7531 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:28.245222 systemd[1]: sshd@44-10.0.0.97:22-10.0.0.1:38560.service: Deactivated successfully. 
Jan 20 02:33:28.248638 systemd-logind[1619]: Session 46 logged out. Waiting for processes to exit. Jan 20 02:33:28.305870 kernel: audit: type=1106 audit(1768876408.203:1078): pid=7531 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:28.306003 kernel: audit: type=1104 audit(1768876408.209:1079): pid=7531 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:28.209000 audit[7531]: CRED_DISP pid=7531 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:28.303350 systemd[1]: session-46.scope: Deactivated successfully. Jan 20 02:33:28.326255 systemd-logind[1619]: Removed session 46. Jan 20 02:33:28.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.97:22-10.0.0.1:38560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:33:28.555150 kubelet[3053]: E0120 02:33:28.551774 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:33:29.547641 kubelet[3053]: E0120 02:33:29.542971 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:33:30.600027 kubelet[3053]: E0120 02:33:30.599910 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:33:33.326597 systemd[1]: Started sshd@45-10.0.0.97:22-10.0.0.1:38570.service - OpenSSH per-connection server daemon (10.0.0.1:38570). Jan 20 02:33:33.355082 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:33:33.355276 kernel: audit: type=1130 audit(1768876413.321:1081): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.97:22-10.0.0.1:38570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:33.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.97:22-10.0.0.1:38570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:34.044000 audit[7551]: USER_ACCT pid=7551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:34.072970 sshd-session[7551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:34.129054 kernel: audit: type=1101 audit(1768876414.044:1082): pid=7551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:34.129114 sshd[7551]: Accepted publickey for core from 10.0.0.1 port 38570 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:33:34.062000 audit[7551]: CRED_ACQ pid=7551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:34.140133 systemd-logind[1619]: New session 47 of user core. Jan 20 02:33:34.233346 kernel: audit: type=1103 audit(1768876414.062:1083): pid=7551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:34.233466 kernel: audit: type=1006 audit(1768876414.062:1084): pid=7551 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=47 res=1 Jan 20 02:33:34.062000 audit[7551]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc14a17710 a2=3 a3=0 items=0 ppid=1 pid=7551 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:34.276073 kernel: audit: type=1300 audit(1768876414.062:1084): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc14a17710 a2=3 a3=0 items=0 ppid=1 pid=7551 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:34.391348 kernel: audit: type=1327 audit(1768876414.062:1084): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:34.062000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:34.391958 systemd[1]: Started session-47.scope - Session 47 of User core. 
Jan 20 02:33:34.453000 audit[7551]: USER_START pid=7551 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:34.573765 kernel: audit: type=1105 audit(1768876414.453:1085): pid=7551 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:34.573910 kernel: audit: type=1103 audit(1768876414.459:1086): pid=7555 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:34.459000 audit[7555]: CRED_ACQ pid=7555 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:34.783153 kubelet[3053]: E0120 02:33:34.783077 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:35.527087 kubelet[3053]: E0120 02:33:35.514568 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:35.623054 sshd[7555]: Connection closed by 10.0.0.1 port 38570 Jan 20 02:33:35.622388 sshd-session[7551]: pam_unix(sshd:session): session closed for user core Jan 20 
02:33:35.627000 audit[7551]: USER_END pid=7551 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:35.639248 systemd[1]: sshd@45-10.0.0.97:22-10.0.0.1:38570.service: Deactivated successfully. Jan 20 02:33:35.644775 systemd[1]: session-47.scope: Deactivated successfully. Jan 20 02:33:35.648818 systemd-logind[1619]: Session 47 logged out. Waiting for processes to exit. Jan 20 02:33:35.651447 systemd-logind[1619]: Removed session 47. Jan 20 02:33:35.664628 kernel: audit: type=1106 audit(1768876415.627:1087): pid=7551 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:35.664875 kernel: audit: type=1104 audit(1768876415.630:1088): pid=7551 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:35.630000 audit[7551]: CRED_DISP pid=7551 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:35.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.97:22-10.0.0.1:38570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:33:36.604587 kubelet[3053]: E0120 02:33:36.604134 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:33:38.529259 kubelet[3053]: E0120 02:33:38.528985 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:38.619144 kubelet[3053]: E0120 02:33:38.594901 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:33:38.621113 kubelet[3053]: E0120 02:33:38.621046 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:33:39.530280 kubelet[3053]: E0120 02:33:39.525488 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:33:40.589488 kubelet[3053]: E0120 02:33:40.589123 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:33:40.717922 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:33:40.718086 kernel: audit: type=1130 audit(1768876420.698:1090): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.97:22-10.0.0.1:47614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:33:40.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.97:22-10.0.0.1:47614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:40.699976 systemd[1]: Started sshd@46-10.0.0.97:22-10.0.0.1:47614.service - OpenSSH per-connection server daemon (10.0.0.1:47614). Jan 20 02:33:41.271000 audit[7595]: USER_ACCT pid=7595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:41.280235 sshd[7595]: Accepted publickey for core from 10.0.0.1 port 47614 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:33:41.294154 sshd-session[7595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:41.419962 kernel: audit: type=1101 audit(1768876421.271:1091): pid=7595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:41.420126 kernel: audit: type=1103 audit(1768876421.287:1092): pid=7595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:41.287000 audit[7595]: CRED_ACQ pid=7595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:41.373749 systemd-logind[1619]: New session 48 of user core. 
Jan 20 02:33:41.287000 audit[7595]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc9c723c0 a2=3 a3=0 items=0 ppid=1 pid=7595 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:41.535817 kernel: audit: type=1006 audit(1768876421.287:1093): pid=7595 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=48 res=1 Jan 20 02:33:41.535981 kernel: audit: type=1300 audit(1768876421.287:1093): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc9c723c0 a2=3 a3=0 items=0 ppid=1 pid=7595 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:41.536122 kubelet[3053]: E0120 02:33:41.525463 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:33:41.287000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:41.564610 kernel: audit: type=1327 audit(1768876421.287:1093): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:41.579581 systemd[1]: Started session-48.scope - Session 48 of User core. Jan 20 02:33:41.683000 audit[7595]: USER_START pid=7595 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:41.725833 kernel: audit: type=1105 audit(1768876421.683:1094): pid=7595 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:41.725990 kernel: audit: type=1103 audit(1768876421.710:1095): pid=7599 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:41.710000 audit[7599]: CRED_ACQ pid=7599 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:42.297725 sshd[7599]: Connection closed by 10.0.0.1 port 47614 Jan 20 02:33:42.302888 sshd-session[7595]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:42.310000 audit[7595]: USER_END pid=7595 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 20 02:33:42.334000 audit[7595]: CRED_DISP pid=7595 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:42.471945 systemd[1]: sshd@46-10.0.0.97:22-10.0.0.1:47614.service: Deactivated successfully. Jan 20 02:33:42.503671 systemd[1]: session-48.scope: Deactivated successfully. Jan 20 02:33:42.517765 kernel: audit: type=1106 audit(1768876422.310:1096): pid=7595 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:42.517921 kernel: audit: type=1104 audit(1768876422.334:1097): pid=7595 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:42.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.97:22-10.0.0.1:47614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:42.523697 systemd-logind[1619]: Session 48 logged out. Waiting for processes to exit. Jan 20 02:33:42.527666 systemd-logind[1619]: Removed session 48. Jan 20 02:33:47.422429 systemd[1]: Started sshd@47-10.0.0.97:22-10.0.0.1:53526.service - OpenSSH per-connection server daemon (10.0.0.1:53526). 
Jan 20 02:33:47.440117 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:33:47.441981 kernel: audit: type=1130 audit(1768876427.420:1099): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.97:22-10.0.0.1:53526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:47.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.97:22-10.0.0.1:53526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:47.535293 kubelet[3053]: E0120 02:33:47.528837 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:33:47.966412 sshd[7612]: Accepted publickey for core from 10.0.0.1 port 53526 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:33:47.965000 audit[7612]: USER_ACCT pid=7612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:47.991695 sshd-session[7612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:48.033234 kernel: audit: type=1101 audit(1768876427.965:1100): pid=7612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:47.981000 audit[7612]: CRED_ACQ pid=7612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:48.042467 systemd-logind[1619]: New session 49 of user core. Jan 20 02:33:48.048391 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 20 02:33:48.110594 kernel: audit: type=1103 audit(1768876427.981:1101): pid=7612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:48.110742 kernel: audit: type=1006 audit(1768876427.981:1102): pid=7612 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=49 res=1 Jan 20 02:33:48.110817 kernel: audit: type=1300 audit(1768876427.981:1102): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd3c4d750 a2=3 a3=0 items=0 ppid=1 pid=7612 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:47.981000 audit[7612]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd3c4d750 a2=3 a3=0 items=0 ppid=1 pid=7612 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:48.198338 kernel: audit: type=1327 audit(1768876427.981:1102): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:47.981000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:48.114000 audit[7612]: USER_START pid=7612 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:48.294623 kernel: audit: type=1105 audit(1768876428.114:1103): pid=7612 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:48.130000 audit[7616]: CRED_ACQ pid=7616 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:48.368582 kernel: audit: type=1103 audit(1768876428.130:1104): pid=7616 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:49.763983 sshd[7616]: Connection closed by 10.0.0.1 port 53526 Jan 20 02:33:49.778875 sshd-session[7612]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:49.783000 audit[7612]: USER_END pid=7612 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:49.805899 systemd[1]: 
sshd@47-10.0.0.97:22-10.0.0.1:53526.service: Deactivated successfully. Jan 20 02:33:49.821148 systemd[1]: session-49.scope: Deactivated successfully. Jan 20 02:33:49.822875 kernel: audit: type=1106 audit(1768876429.783:1105): pid=7612 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:49.791000 audit[7612]: CRED_DISP pid=7612 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:49.848213 systemd-logind[1619]: Session 49 logged out. Waiting for processes to exit. Jan 20 02:33:49.856775 kernel: audit: type=1104 audit(1768876429.791:1106): pid=7612 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:49.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.97:22-10.0.0.1:53526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:49.877755 systemd-logind[1619]: Removed session 49. 
Jan 20 02:33:50.567913 kubelet[3053]: E0120 02:33:50.560738 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:33:52.547227 kubelet[3053]: E0120 02:33:52.537827 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:33:53.518643 kubelet[3053]: E0120 02:33:53.518061 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:54.530216 kubelet[3053]: E0120 02:33:54.528475 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:33:54.530216 kubelet[3053]: E0120 02:33:54.528963 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:33:54.868425 systemd[1]: Started sshd@48-10.0.0.97:22-10.0.0.1:46432.service - OpenSSH per-connection server daemon (10.0.0.1:46432). Jan 20 02:33:54.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.97:22-10.0.0.1:46432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:54.882106 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:33:54.882235 kernel: audit: type=1130 audit(1768876434.873:1108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.97:22-10.0.0.1:46432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:33:55.474755 sshd[7631]: Accepted publickey for core from 10.0.0.1 port 46432 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:33:55.471000 audit[7631]: USER_ACCT pid=7631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:55.528970 sshd-session[7631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:33:55.563563 kernel: audit: type=1101 audit(1768876435.471:1109): pid=7631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:55.506000 audit[7631]: CRED_ACQ pid=7631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:55.685398 kernel: audit: type=1103 audit(1768876435.506:1110): pid=7631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:55.685932 kernel: audit: type=1006 audit(1768876435.506:1111): pid=7631 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=50 res=1 Jan 20 02:33:55.686001 kernel: audit: type=1300 audit(1768876435.506:1111): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd276fb50 a2=3 a3=0 items=0 ppid=1 pid=7631 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:55.506000 audit[7631]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd276fb50 a2=3 a3=0 items=0 ppid=1 pid=7631 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:33:55.705744 systemd-logind[1619]: New session 50 of user core. Jan 20 02:33:55.756725 kernel: audit: type=1327 audit(1768876435.506:1111): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:55.506000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:33:55.787892 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 20 02:33:55.867000 audit[7631]: USER_START pid=7631 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:55.911122 kernel: audit: type=1105 audit(1768876435.867:1112): pid=7631 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:55.911316 kernel: audit: type=1103 audit(1768876435.889:1113): pid=7635 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:55.889000 audit[7635]: CRED_ACQ pid=7635 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:56.591775 kubelet[3053]: E0120 02:33:56.584952 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:33:56.766294 sshd[7635]: Connection closed by 10.0.0.1 port 46432 Jan 20 02:33:56.771489 sshd-session[7631]: pam_unix(sshd:session): session closed for user core Jan 20 02:33:56.782000 audit[7631]: USER_END pid=7631 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:56.822000 audit[7631]: CRED_DISP pid=7631 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:56.934910 systemd[1]: sshd@48-10.0.0.97:22-10.0.0.1:46432.service: Deactivated 
successfully. Jan 20 02:33:57.062824 kernel: audit: type=1106 audit(1768876436.782:1114): pid=7631 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:57.062978 kernel: audit: type=1104 audit(1768876436.822:1115): pid=7631 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:33:57.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.97:22-10.0.0.1:46432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:33:57.076801 systemd[1]: session-50.scope: Deactivated successfully. Jan 20 02:33:57.122486 systemd-logind[1619]: Session 50 logged out. Waiting for processes to exit. Jan 20 02:33:57.149419 systemd-logind[1619]: Removed session 50. 
Jan 20 02:33:58.607128 kubelet[3053]: E0120 02:33:58.607039 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:33:58.620156 kubelet[3053]: E0120 02:33:58.615012 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:34:01.533071 kubelet[3053]: E0120 02:34:01.531621 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:34:01.869711 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:34:01.869917 kernel: audit: type=1130 audit(1768876441.859:1117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.97:22-10.0.0.1:46434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:01.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.97:22-10.0.0.1:46434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:34:01.862738 systemd[1]: Started sshd@49-10.0.0.97:22-10.0.0.1:46434.service - OpenSSH per-connection server daemon (10.0.0.1:46434). Jan 20 02:34:02.263000 audit[7650]: USER_ACCT pid=7650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:02.300975 sshd-session[7650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:02.321316 sshd[7650]: Accepted publickey for core from 10.0.0.1 port 46434 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:34:02.370051 kernel: audit: type=1101 audit(1768876442.263:1118): pid=7650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:02.282000 audit[7650]: CRED_ACQ pid=7650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:02.422623 kernel: audit: type=1103 audit(1768876442.282:1119): pid=7650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:02.408129 systemd-logind[1619]: New session 51 of user core. 
Jan 20 02:34:02.290000 audit[7650]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff784db80 a2=3 a3=0 items=0 ppid=1 pid=7650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:02.556478 kernel: audit: type=1006 audit(1768876442.290:1120): pid=7650 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=51 res=1 Jan 20 02:34:02.556726 kernel: audit: type=1300 audit(1768876442.290:1120): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff784db80 a2=3 a3=0 items=0 ppid=1 pid=7650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:02.290000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:02.597335 kernel: audit: type=1327 audit(1768876442.290:1120): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:02.597931 systemd[1]: Started session-51.scope - Session 51 of User core. 
Jan 20 02:34:02.666000 audit[7650]: USER_START pid=7650 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:02.714715 kernel: audit: type=1105 audit(1768876442.666:1121): pid=7650 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:02.716000 audit[7654]: CRED_ACQ pid=7654 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:02.766055 kernel: audit: type=1103 audit(1768876442.716:1122): pid=7654 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:03.898329 sshd[7654]: Connection closed by 10.0.0.1 port 46434 Jan 20 02:34:03.888706 sshd-session[7650]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:03.925000 audit[7650]: USER_END pid=7650 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:03.999189 systemd[1]: sshd@49-10.0.0.97:22-10.0.0.1:46434.service: Deactivated successfully. 
Jan 20 02:34:03.925000 audit[7650]: CRED_DISP pid=7650 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:04.075297 kernel: audit: type=1106 audit(1768876443.925:1123): pid=7650 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:04.075481 kernel: audit: type=1104 audit(1768876443.925:1124): pid=7650 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:04.082185 systemd[1]: session-51.scope: Deactivated successfully. Jan 20 02:34:03.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.97:22-10.0.0.1:46434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:04.138628 systemd-logind[1619]: Session 51 logged out. Waiting for processes to exit. Jan 20 02:34:04.150893 systemd-logind[1619]: Removed session 51. 
Jan 20 02:34:05.526732 kubelet[3053]: E0120 02:34:05.520731 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:34:05.543732 kubelet[3053]: E0120 02:34:05.535110 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:34:07.601630 kubelet[3053]: E0120 02:34:07.593766 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:34:09.012254 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:34:09.012396 kernel: audit: type=1130 audit(1768876448.998:1126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.97:22-10.0.0.1:57144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:08.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.97:22-10.0.0.1:57144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:09.002383 systemd[1]: Started sshd@50-10.0.0.97:22-10.0.0.1:57144.service - OpenSSH per-connection server daemon (10.0.0.1:57144). Jan 20 02:34:09.292000 audit[7694]: USER_ACCT pid=7694 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:09.315577 sshd-session[7694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:09.329899 sshd[7694]: Accepted publickey for core from 10.0.0.1 port 57144 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:34:09.393383 kernel: audit: type=1101 audit(1768876449.292:1127): pid=7694 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:09.393622 kernel: audit: type=1103 audit(1768876449.303:1128): pid=7694 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:09.303000 audit[7694]: CRED_ACQ pid=7694 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:09.463803 kernel: audit: type=1006 audit(1768876449.303:1129): pid=7694 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=52 res=1 Jan 20 02:34:09.303000 audit[7694]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb87d2bf0 a2=3 a3=0 items=0 ppid=1 pid=7694 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:09.475157 systemd-logind[1619]: New session 52 of user core. 
Jan 20 02:34:09.517046 kubelet[3053]: E0120 02:34:09.516982 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:34:09.528057 kernel: audit: type=1300 audit(1768876449.303:1129): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb87d2bf0 a2=3 a3=0 items=0 ppid=1 pid=7694 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:09.534417 kernel: audit: type=1327 audit(1768876449.303:1129): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:09.303000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:09.559122 systemd[1]: Started session-52.scope - Session 52 of User core. 
Jan 20 02:34:09.575000 audit[7694]: USER_START pid=7694 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:09.660662 kernel: audit: type=1105 audit(1768876449.575:1130): pid=7694 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:09.660855 kernel: audit: type=1103 audit(1768876449.595:1131): pid=7698 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:09.595000 audit[7698]: CRED_ACQ pid=7698 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:10.589871 sshd[7698]: Connection closed by 10.0.0.1 port 57144 Jan 20 02:34:10.603772 sshd-session[7694]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:10.621000 audit[7694]: USER_END pid=7694 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:10.695969 systemd[1]: sshd@50-10.0.0.97:22-10.0.0.1:57144.service: Deactivated successfully. 
Jan 20 02:34:10.727585 kernel: audit: type=1106 audit(1768876450.621:1132): pid=7694 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:10.719716 systemd[1]: session-52.scope: Deactivated successfully. Jan 20 02:34:10.722602 systemd-logind[1619]: Session 52 logged out. Waiting for processes to exit. Jan 20 02:34:10.621000 audit[7694]: CRED_DISP pid=7694 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:10.736933 systemd-logind[1619]: Removed session 52. Jan 20 02:34:10.810126 kernel: audit: type=1104 audit(1768876450.621:1133): pid=7694 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:10.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.97:22-10.0.0.1:57144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:34:11.519317 kubelet[3053]: E0120 02:34:11.518955 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:34:15.519773 kubelet[3053]: E0120 02:34:15.517159 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:34:15.787934 systemd[1]: Started sshd@51-10.0.0.97:22-10.0.0.1:33326.service - OpenSSH per-connection server daemon (10.0.0.1:33326). Jan 20 02:34:15.837983 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:34:15.838149 kernel: audit: type=1130 audit(1768876455.783:1135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.97:22-10.0.0.1:33326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 02:34:15.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.97:22-10.0.0.1:33326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:16.568600 kernel: audit: type=1101 audit(1768876456.492:1136): pid=7719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:16.492000 audit[7719]: USER_ACCT pid=7719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:16.541659 sshd-session[7719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:16.569398 sshd[7719]: Accepted publickey for core from 10.0.0.1 port 33326 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:34:16.588956 kubelet[3053]: E0120 02:34:16.588188 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:34:16.506000 audit[7719]: CRED_ACQ pid=7719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:16.605309 systemd-logind[1619]: New session 53 of user core. Jan 20 02:34:16.657746 kernel: audit: type=1103 audit(1768876456.506:1137): pid=7719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:16.657937 kernel: audit: type=1006 audit(1768876456.513:1138): pid=7719 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=53 res=1 Jan 20 02:34:16.658004 kernel: audit: type=1300 audit(1768876456.513:1138): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffad771370 a2=3 a3=0 items=0 ppid=1 pid=7719 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:16.513000 audit[7719]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffad771370 a2=3 a3=0 items=0 ppid=1 pid=7719 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:16.688412 kernel: audit: type=1327 audit(1768876456.513:1138): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:16.513000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:16.713915 systemd[1]: Started session-53.scope - Session 53 of User core. 
Jan 20 02:34:16.736000 audit[7719]: USER_START pid=7719 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:16.841071 kernel: audit: type=1105 audit(1768876456.736:1139): pid=7719 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:16.841260 kernel: audit: type=1103 audit(1768876456.783:1140): pid=7723 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:16.783000 audit[7723]: CRED_ACQ pid=7723 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:18.094470 sshd[7723]: Connection closed by 10.0.0.1 port 33326 Jan 20 02:34:18.097478 sshd-session[7719]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:18.152000 audit[7719]: USER_END pid=7719 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:18.200581 systemd-logind[1619]: Session 53 logged out. Waiting for processes to exit. 
Jan 20 02:34:18.153000 audit[7719]: CRED_DISP pid=7719 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:18.237457 systemd[1]: sshd@51-10.0.0.97:22-10.0.0.1:33326.service: Deactivated successfully. Jan 20 02:34:18.284613 kernel: audit: type=1106 audit(1768876458.152:1141): pid=7719 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:18.284973 kernel: audit: type=1104 audit(1768876458.153:1142): pid=7719 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:18.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.97:22-10.0.0.1:33326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:18.283128 systemd[1]: session-53.scope: Deactivated successfully. Jan 20 02:34:18.298678 systemd-logind[1619]: Removed session 53. 
Jan 20 02:34:18.678736 kubelet[3053]: E0120 02:34:18.673717 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:34:19.563841 kubelet[3053]: E0120 02:34:19.562888 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:34:22.591476 kubelet[3053]: E0120 02:34:22.590673 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:34:23.221086 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:34:23.221279 kernel: audit: type=1130 audit(1768876463.195:1144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.97:22-10.0.0.1:33342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:23.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.97:22-10.0.0.1:33342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:23.197496 systemd[1]: Started sshd@52-10.0.0.97:22-10.0.0.1:33342.service - OpenSSH per-connection server daemon (10.0.0.1:33342). Jan 20 02:34:23.521769 kubelet[3053]: E0120 02:34:23.513585 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:34:23.681469 sshd[7737]: Accepted publickey for core from 10.0.0.1 port 33342 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:34:23.680000 audit[7737]: USER_ACCT pid=7737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:23.690065 sshd-session[7737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:23.770050 kernel: audit: type=1101 audit(1768876463.680:1145): pid=7737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:23.770249 kernel: audit: type=1103 audit(1768876463.686:1146): pid=7737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:23.686000 audit[7737]: CRED_ACQ pid=7737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:23.786871 systemd-logind[1619]: New session 54 of user core. Jan 20 02:34:23.821993 kernel: audit: type=1006 audit(1768876463.686:1147): pid=7737 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=54 res=1 Jan 20 02:34:23.686000 audit[7737]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccdeff230 a2=3 a3=0 items=0 ppid=1 pid=7737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:23.941427 kernel: audit: type=1300 audit(1768876463.686:1147): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccdeff230 a2=3 a3=0 items=0 ppid=1 pid=7737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:23.941602 kernel: audit: type=1327 audit(1768876463.686:1147): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:23.686000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:23.973208 systemd[1]: Started session-54.scope - Session 54 of User core. 
Jan 20 02:34:24.009000 audit[7737]: USER_START pid=7737 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:24.075596 kernel: audit: type=1105 audit(1768876464.009:1148): pid=7737 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:24.035000 audit[7743]: CRED_ACQ pid=7743 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:24.113587 kernel: audit: type=1103 audit(1768876464.035:1149): pid=7743 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:24.535036 kubelet[3053]: E0120 02:34:24.528954 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:34:24.822702 sshd[7743]: Connection closed by 10.0.0.1 port 33342 Jan 20 02:34:24.824036 sshd-session[7737]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:24.853000 audit[7737]: USER_END pid=7737 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:24.878725 systemd[1]: sshd@52-10.0.0.97:22-10.0.0.1:33342.service: Deactivated successfully. Jan 20 02:34:24.896255 systemd[1]: session-54.scope: Deactivated successfully. Jan 20 02:34:24.906493 kernel: audit: type=1106 audit(1768876464.853:1150): pid=7737 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:24.854000 audit[7737]: CRED_DISP pid=7737 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:24.958131 kernel: audit: type=1104 audit(1768876464.854:1151): pid=7737 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:24.885000 audit[1]: 
SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.97:22-10.0.0.1:33342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:24.911952 systemd-logind[1619]: Session 54 logged out. Waiting for processes to exit. Jan 20 02:34:24.922626 systemd-logind[1619]: Removed session 54. Jan 20 02:34:27.568389 kubelet[3053]: E0120 02:34:27.566509 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:34:28.536164 kubelet[3053]: E0120 02:34:28.532858 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:34:29.563805 kubelet[3053]: E0120 02:34:29.532797 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:34:29.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.97:22-10.0.0.1:47038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:29.924948 systemd[1]: Started sshd@53-10.0.0.97:22-10.0.0.1:47038.service - OpenSSH per-connection server daemon (10.0.0.1:47038). Jan 20 02:34:29.962586 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:34:29.962772 kernel: audit: type=1130 audit(1768876469.919:1153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.97:22-10.0.0.1:47038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:34:30.641000 audit[7767]: USER_ACCT pid=7767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:30.726694 kernel: audit: type=1101 audit(1768876470.641:1154): pid=7767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:30.705336 sshd-session[7767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:30.732338 sshd[7767]: Accepted publickey for core from 10.0.0.1 port 47038 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:34:30.671000 audit[7767]: CRED_ACQ pid=7767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:30.838868 kernel: audit: type=1103 audit(1768876470.671:1155): pid=7767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:30.839005 kernel: audit: type=1006 audit(1768876470.671:1156): pid=7767 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=55 res=1 Jan 20 02:34:30.671000 audit[7767]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1da8f100 a2=3 a3=0 items=0 ppid=1 pid=7767 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=55 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:30.916754 systemd-logind[1619]: New session 55 of user core. Jan 20 02:34:30.974447 kernel: audit: type=1300 audit(1768876470.671:1156): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1da8f100 a2=3 a3=0 items=0 ppid=1 pid=7767 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=55 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:30.978021 kernel: audit: type=1327 audit(1768876470.671:1156): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:30.671000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:31.011873 systemd[1]: Started session-55.scope - Session 55 of User core. Jan 20 02:34:31.060000 audit[7767]: USER_START pid=7767 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:31.231596 kernel: audit: type=1105 audit(1768876471.060:1157): pid=7767 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:31.231784 kernel: audit: type=1103 audit(1768876471.141:1158): pid=7773 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:31.141000 audit[7773]: CRED_ACQ pid=7773 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:31.577730 kubelet[3053]: E0120 02:34:31.577616 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:34:32.605585 sshd[7773]: Connection closed by 10.0.0.1 port 47038 Jan 20 02:34:32.612483 sshd-session[7767]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:32.617000 audit[7767]: USER_END pid=7767 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:32.679696 systemd[1]: sshd@53-10.0.0.97:22-10.0.0.1:47038.service: Deactivated successfully. Jan 20 02:34:32.704948 systemd[1]: session-55.scope: Deactivated successfully. Jan 20 02:34:32.725467 kernel: audit: type=1106 audit(1768876472.617:1159): pid=7767 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:32.718152 systemd-logind[1619]: Session 55 logged out. Waiting for processes to exit. Jan 20 02:34:32.624000 audit[7767]: CRED_DISP pid=7767 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:32.759194 systemd-logind[1619]: Removed session 55. 
Jan 20 02:34:32.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.97:22-10.0.0.1:47038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:32.784790 kernel: audit: type=1104 audit(1768876472.624:1160): pid=7767 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:33.525505 kubelet[3053]: E0120 02:34:33.525392 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:34:33.544324 kubelet[3053]: E0120 02:34:33.540176 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:34:36.681270 kubelet[3053]: E0120 02:34:36.679649 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:34:37.739451 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:34:37.739759 kernel: audit: type=1130 audit(1768876477.722:1162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.97:22-10.0.0.1:44578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:37.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.97:22-10.0.0.1:44578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:37.726254 systemd[1]: Started sshd@54-10.0.0.97:22-10.0.0.1:44578.service - OpenSSH per-connection server daemon (10.0.0.1:44578). 
Jan 20 02:34:38.241000 audit[7787]: USER_ACCT pid=7787 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:38.258832 sshd[7787]: Accepted publickey for core from 10.0.0.1 port 44578 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:34:38.283701 sshd-session[7787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:38.303268 kernel: audit: type=1101 audit(1768876478.241:1163): pid=7787 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:38.254000 audit[7787]: CRED_ACQ pid=7787 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:38.344438 systemd-logind[1619]: New session 56 of user core. Jan 20 02:34:38.376313 kernel: audit: type=1103 audit(1768876478.254:1164): pid=7787 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:38.386911 systemd[1]: Started session-56.scope - Session 56 of User core. 
Jan 20 02:34:38.429678 kernel: audit: type=1006 audit(1768876478.254:1165): pid=7787 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=56 res=1 Jan 20 02:34:38.429836 kernel: audit: type=1300 audit(1768876478.254:1165): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb68cd670 a2=3 a3=0 items=0 ppid=1 pid=7787 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=56 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:38.254000 audit[7787]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb68cd670 a2=3 a3=0 items=0 ppid=1 pid=7787 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=56 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:38.254000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:38.454269 kernel: audit: type=1327 audit(1768876478.254:1165): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:38.425000 audit[7787]: USER_START pid=7787 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:38.512973 kernel: audit: type=1105 audit(1768876478.425:1166): pid=7787 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:38.448000 audit[7811]: CRED_ACQ pid=7811 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:38.565614 kernel: audit: type=1103 audit(1768876478.448:1167): pid=7811 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:39.785416 sshd[7811]: Connection closed by 10.0.0.1 port 44578 Jan 20 02:34:39.793350 sshd-session[7787]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:39.946871 kernel: audit: type=1106 audit(1768876479.851:1168): pid=7787 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:39.851000 audit[7787]: USER_END pid=7787 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:39.902953 systemd[1]: sshd@54-10.0.0.97:22-10.0.0.1:44578.service: Deactivated successfully. Jan 20 02:34:39.930494 systemd[1]: session-56.scope: Deactivated successfully. Jan 20 02:34:39.973916 systemd-logind[1619]: Session 56 logged out. Waiting for processes to exit. 
Jan 20 02:34:39.852000 audit[7787]: CRED_DISP pid=7787 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:40.031070 kernel: audit: type=1104 audit(1768876479.852:1169): pid=7787 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:39.992846 systemd-logind[1619]: Removed session 56. Jan 20 02:34:39.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.97:22-10.0.0.1:44578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:40.588634 kubelet[3053]: E0120 02:34:40.588003 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:34:40.596920 kubelet[3053]: E0120 02:34:40.596866 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:34:41.533115 kubelet[3053]: E0120 02:34:41.523411 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:34:44.531768 kubelet[3053]: E0120 02:34:44.531303 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:34:44.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.97:22-10.0.0.1:36508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:34:44.842868 systemd[1]: Started sshd@55-10.0.0.97:22-10.0.0.1:36508.service - OpenSSH per-connection server daemon (10.0.0.1:36508). Jan 20 02:34:44.915591 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:34:44.915774 kernel: audit: type=1130 audit(1768876484.841:1171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.97:22-10.0.0.1:36508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:45.544100 sshd[7829]: Accepted publickey for core from 10.0.0.1 port 36508 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:34:45.536000 audit[7829]: USER_ACCT pid=7829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:45.629346 kernel: audit: type=1101 audit(1768876485.536:1172): pid=7829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:45.643572 kernel: audit: type=1103 audit(1768876485.614:1173): pid=7829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:45.614000 audit[7829]: CRED_ACQ pid=7829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:45.660356 sshd-session[7829]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Jan 20 02:34:45.722820 kernel: audit: type=1006 audit(1768876485.630:1174): pid=7829 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=57 res=1 Jan 20 02:34:45.630000 audit[7829]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5f254420 a2=3 a3=0 items=0 ppid=1 pid=7829 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=57 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:45.794449 kernel: audit: type=1300 audit(1768876485.630:1174): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5f254420 a2=3 a3=0 items=0 ppid=1 pid=7829 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=57 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:45.794660 kernel: audit: type=1327 audit(1768876485.630:1174): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:45.630000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:45.813912 systemd-logind[1619]: New session 57 of user core. Jan 20 02:34:45.883258 systemd[1]: Started session-57.scope - Session 57 of User core. 
Jan 20 02:34:45.900000 audit[7829]: USER_START pid=7829 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:45.989595 kernel: audit: type=1105 audit(1768876485.900:1175): pid=7829 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:45.907000 audit[7833]: CRED_ACQ pid=7833 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:46.057613 kernel: audit: type=1103 audit(1768876485.907:1176): pid=7833 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:46.550428 kubelet[3053]: E0120 02:34:46.549109 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:34:46.954116 sshd[7833]: Connection closed by 10.0.0.1 port 36508 Jan 20 02:34:46.941843 sshd-session[7829]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:46.937000 audit[7829]: USER_END pid=7829 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:46.998200 systemd[1]: sshd@55-10.0.0.97:22-10.0.0.1:36508.service: Deactivated successfully. Jan 20 02:34:47.003983 systemd[1]: session-57.scope: Deactivated successfully. Jan 20 02:34:47.024685 systemd-logind[1619]: Session 57 logged out. Waiting for processes to exit. Jan 20 02:34:47.040042 systemd-logind[1619]: Removed session 57. Jan 20 02:34:46.937000 audit[7829]: CRED_DISP pid=7829 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:47.130634 kernel: audit: type=1106 audit(1768876486.937:1177): pid=7829 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:47.130801 kernel: audit: type=1104 audit(1768876486.937:1178): pid=7829 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:46.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.97:22-10.0.0.1:36508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:34:47.530634 kubelet[3053]: E0120 02:34:47.530328 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:34:48.565301 kubelet[3053]: E0120 02:34:48.555793 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:34:50.525904 kubelet[3053]: E0120 02:34:50.525270 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:34:51.555557 kubelet[3053]: E0120 02:34:51.551759 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:34:52.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.97:22-10.0.0.1:36514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:52.009096 systemd[1]: Started sshd@56-10.0.0.97:22-10.0.0.1:36514.service - OpenSSH per-connection server daemon (10.0.0.1:36514). Jan 20 02:34:52.070389 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:34:52.070689 kernel: audit: type=1130 audit(1768876492.011:1180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.97:22-10.0.0.1:36514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:34:52.482000 audit[7848]: USER_ACCT pid=7848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:52.509605 sshd-session[7848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:34:52.532723 kernel: audit: type=1101 audit(1768876492.482:1181): pid=7848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:52.532830 sshd[7848]: Accepted publickey for core from 10.0.0.1 port 36514 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:34:52.491000 audit[7848]: CRED_ACQ pid=7848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:52.596380 kernel: audit: type=1103 audit(1768876492.491:1182): pid=7848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:52.623017 systemd-logind[1619]: New session 58 of user core. 
Jan 20 02:34:52.625806 kernel: audit: type=1006 audit(1768876492.491:1183): pid=7848 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=58 res=1 Jan 20 02:34:52.491000 audit[7848]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff12ad2520 a2=3 a3=0 items=0 ppid=1 pid=7848 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=58 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:52.491000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:52.723489 kernel: audit: type=1300 audit(1768876492.491:1183): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff12ad2520 a2=3 a3=0 items=0 ppid=1 pid=7848 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=58 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:34:52.723737 kernel: audit: type=1327 audit(1768876492.491:1183): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:34:52.736152 systemd[1]: Started session-58.scope - Session 58 of User core. 
Jan 20 02:34:52.763000 audit[7848]: USER_START pid=7848 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:52.877073 kernel: audit: type=1105 audit(1768876492.763:1184): pid=7848 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:52.877274 kernel: audit: type=1103 audit(1768876492.784:1185): pid=7852 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:52.784000 audit[7852]: CRED_ACQ pid=7852 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:53.535278 kubelet[3053]: E0120 02:34:53.528301 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:34:53.556757 kubelet[3053]: E0120 02:34:53.556666 3053 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:34:54.022376 sshd[7852]: Connection closed by 10.0.0.1 port 36514 Jan 20 02:34:54.021623 sshd-session[7848]: pam_unix(sshd:session): session closed for user core Jan 20 02:34:54.055000 audit[7848]: USER_END pid=7848 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:54.100902 systemd[1]: sshd@56-10.0.0.97:22-10.0.0.1:36514.service: Deactivated successfully. Jan 20 02:34:54.126936 systemd[1]: session-58.scope: Deactivated successfully. Jan 20 02:34:54.141462 systemd-logind[1619]: Session 58 logged out. Waiting for processes to exit. 
Jan 20 02:34:54.174382 kernel: audit: type=1106 audit(1768876494.055:1186): pid=7848 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:54.174620 kernel: audit: type=1104 audit(1768876494.055:1187): pid=7848 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:54.055000 audit[7848]: CRED_DISP pid=7848 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:34:54.205944 systemd-logind[1619]: Removed session 58. Jan 20 02:34:54.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.97:22-10.0.0.1:36514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:34:56.577092 kubelet[3053]: E0120 02:34:56.568727 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:34:57.522585 kubelet[3053]: E0120 02:34:57.515054 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:34:59.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.97:22-10.0.0.1:55758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:34:59.068388 systemd[1]: Started sshd@57-10.0.0.97:22-10.0.0.1:55758.service - OpenSSH per-connection server daemon (10.0.0.1:55758). Jan 20 02:34:59.096047 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:34:59.096254 kernel: audit: type=1130 audit(1768876499.065:1189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.97:22-10.0.0.1:55758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:35:00.102130 sshd[7877]: Accepted publickey for core from 10.0.0.1 port 55758 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:35:00.097648 sshd-session[7877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:00.080000 audit[7877]: USER_ACCT pid=7877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:00.179997 kernel: audit: type=1101 audit(1768876500.080:1190): pid=7877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:00.093000 audit[7877]: CRED_ACQ pid=7877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:00.229302 kernel: audit: type=1103 audit(1768876500.093:1191): pid=7877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:00.245481 systemd-logind[1619]: New session 59 of user core. 
Jan 20 02:35:00.335229 kernel: audit: type=1006 audit(1768876500.093:1192): pid=7877 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=59 res=1 Jan 20 02:35:00.335399 kernel: audit: type=1300 audit(1768876500.093:1192): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2cc136c0 a2=3 a3=0 items=0 ppid=1 pid=7877 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=59 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:35:00.093000 audit[7877]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2cc136c0 a2=3 a3=0 items=0 ppid=1 pid=7877 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=59 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:35:00.093000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:35:00.358421 kernel: audit: type=1327 audit(1768876500.093:1192): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:35:00.376032 systemd[1]: Started session-59.scope - Session 59 of User core. 
Jan 20 02:35:00.417000 audit[7877]: USER_START pid=7877 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:00.448000 audit[7883]: CRED_ACQ pid=7883 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:00.552426 kernel: audit: type=1105 audit(1768876500.417:1193): pid=7877 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:00.552650 kernel: audit: type=1103 audit(1768876500.448:1194): pid=7883 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:00.583006 kubelet[3053]: E0120 02:35:00.582059 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:35:01.603141 kubelet[3053]: E0120 02:35:01.602965 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:35:02.385375 sshd[7883]: Connection closed by 10.0.0.1 port 55758 Jan 20 02:35:02.381877 sshd-session[7877]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:02.405000 audit[7877]: USER_END pid=7877 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:02.447755 systemd[1]: sshd@57-10.0.0.97:22-10.0.0.1:55758.service: Deactivated successfully. Jan 20 02:35:02.491919 systemd[1]: session-59.scope: Deactivated successfully. Jan 20 02:35:02.516967 systemd-logind[1619]: Session 59 logged out. Waiting for processes to exit. 
Jan 20 02:35:02.405000 audit[7877]: CRED_DISP pid=7877 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:02.538655 systemd-logind[1619]: Removed session 59. Jan 20 02:35:02.589368 kernel: audit: type=1106 audit(1768876502.405:1195): pid=7877 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:02.589489 kernel: audit: type=1104 audit(1768876502.405:1196): pid=7877 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:02.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.97:22-10.0.0.1:55758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:35:05.525442 kubelet[3053]: E0120 02:35:05.523287 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:35:06.534918 kubelet[3053]: E0120 02:35:06.534683 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:06.580854 kubelet[3053]: E0120 02:35:06.580225 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:35:07.488488 systemd[1]: Started sshd@58-10.0.0.97:22-10.0.0.1:46160.service - OpenSSH per-connection server daemon (10.0.0.1:46160). Jan 20 02:35:07.581112 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:35:07.587005 kernel: audit: type=1130 audit(1768876507.491:1198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.97:22-10.0.0.1:46160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:35:07.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.97:22-10.0.0.1:46160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:35:07.608746 kubelet[3053]: E0120 02:35:07.608686 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:35:07.640280 kubelet[3053]: E0120 02:35:07.638681 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:35:07.972000 audit[7899]: USER_ACCT pid=7899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:08.013648 sshd-session[7899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:08.028572 kernel: audit: type=1101 audit(1768876507.972:1199): pid=7899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:08.028635 sshd[7899]: Accepted publickey for core from 10.0.0.1 port 46160 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:35:08.002000 audit[7899]: CRED_ACQ pid=7899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:08.058958 systemd-logind[1619]: New session 60 of user core. 
Jan 20 02:35:08.095669 kernel: audit: type=1103 audit(1768876508.002:1200): pid=7899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:08.002000 audit[7899]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca87c6c00 a2=3 a3=0 items=0 ppid=1 pid=7899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:35:08.188317 kernel: audit: type=1006 audit(1768876508.002:1201): pid=7899 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=60 res=1 Jan 20 02:35:08.188676 kernel: audit: type=1300 audit(1768876508.002:1201): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca87c6c00 a2=3 a3=0 items=0 ppid=1 pid=7899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:35:08.002000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:35:08.196944 systemd[1]: Started session-60.scope - Session 60 of User core. 
Jan 20 02:35:08.214057 kernel: audit: type=1327 audit(1768876508.002:1201): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:35:08.236000 audit[7899]: USER_START pid=7899 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:08.274000 audit[7928]: CRED_ACQ pid=7928 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:08.386792 kernel: audit: type=1105 audit(1768876508.236:1202): pid=7899 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:08.441373 kernel: audit: type=1103 audit(1768876508.274:1203): pid=7928 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:10.371760 sshd[7928]: Connection closed by 10.0.0.1 port 46160 Jan 20 02:35:10.382972 sshd-session[7899]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:10.549385 kernel: audit: type=1106 audit(1768876510.456:1204): pid=7899 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:10.456000 audit[7899]: USER_END pid=7899 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:10.595512 systemd[1]: sshd@58-10.0.0.97:22-10.0.0.1:46160.service: Deactivated successfully. Jan 20 02:35:10.613388 systemd[1]: session-60.scope: Deactivated successfully. Jan 20 02:35:10.619288 systemd-logind[1619]: Session 60 logged out. Waiting for processes to exit. Jan 20 02:35:10.685119 kernel: audit: type=1104 audit(1768876510.469:1205): pid=7899 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:10.469000 audit[7899]: CRED_DISP pid=7899 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:10.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.97:22-10.0.0.1:46160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:35:10.653126 systemd-logind[1619]: Removed session 60. Jan 20 02:35:15.494631 systemd[1]: Started sshd@59-10.0.0.97:22-10.0.0.1:33260.service - OpenSSH per-connection server daemon (10.0.0.1:33260). 
Jan 20 02:35:15.528693 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:35:15.528871 kernel: audit: type=1130 audit(1768876515.493:1207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.97:22-10.0.0.1:33260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:35:15.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.97:22-10.0.0.1:33260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:35:15.583937 kubelet[3053]: E0120 02:35:15.583869 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:35:15.656698 kubelet[3053]: E0120 02:35:15.584487 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:35:16.200382 sshd[7952]: Accepted publickey for core from 10.0.0.1 port 33260 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:35:16.178000 audit[7952]: USER_ACCT pid=7952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:16.221000 sshd-session[7952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:16.302840 kernel: audit: type=1101 audit(1768876516.178:1208): pid=7952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:16.185000 audit[7952]: CRED_ACQ pid=7952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:16.379356 kernel: audit: type=1103 audit(1768876516.185:1209): pid=7952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:16.379573 kernel: audit: type=1006 audit(1768876516.185:1210): pid=7952 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=61 res=1 Jan 20 02:35:16.388700 systemd-logind[1619]: New session 61 of user core. 
Jan 20 02:35:16.185000 audit[7952]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff65bd3870 a2=3 a3=0 items=0 ppid=1 pid=7952 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=61 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:35:16.497030 kernel: audit: type=1300 audit(1768876516.185:1210): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff65bd3870 a2=3 a3=0 items=0 ppid=1 pid=7952 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=61 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:35:16.497243 kernel: audit: type=1327 audit(1768876516.185:1210): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:35:16.185000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:35:16.539895 systemd[1]: Started session-61.scope - Session 61 of User core. 
Jan 20 02:35:16.554479 kubelet[3053]: E0120 02:35:16.541071 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:16.662000 audit[7952]: USER_START pid=7952 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:16.791643 kernel: audit: type=1105 audit(1768876516.662:1211): pid=7952 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:16.805000 audit[7956]: CRED_ACQ pid=7956 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:16.896019 kernel: audit: type=1103 audit(1768876516.805:1212): pid=7956 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:18.260100 sshd[7956]: Connection closed by 10.0.0.1 port 33260 Jan 20 02:35:18.259862 sshd-session[7952]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:18.268000 audit[7952]: USER_END pid=7952 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:18.289202 systemd[1]: sshd@59-10.0.0.97:22-10.0.0.1:33260.service: Deactivated successfully. Jan 20 02:35:18.268000 audit[7952]: CRED_DISP pid=7952 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:18.311756 systemd[1]: session-61.scope: Deactivated successfully. Jan 20 02:35:18.328961 systemd-logind[1619]: Session 61 logged out. Waiting for processes to exit. Jan 20 02:35:18.342569 kernel: audit: type=1106 audit(1768876518.268:1213): pid=7952 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:18.342636 kernel: audit: type=1104 audit(1768876518.268:1214): pid=7952 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:18.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.97:22-10.0.0.1:33260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:35:18.393885 systemd-logind[1619]: Removed session 61. 
Jan 20 02:35:18.555751 kubelet[3053]: E0120 02:35:18.555702 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:35:18.599510 kubelet[3053]: E0120 02:35:18.599056 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:35:20.571185 kubelet[3053]: E0120 02:35:20.571086 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:35:20.661083 kubelet[3053]: E0120 02:35:20.660910 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:35:21.524287 kubelet[3053]: E0120 02:35:21.522298 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:23.378946 systemd[1]: Started sshd@60-10.0.0.97:22-10.0.0.1:33262.service - OpenSSH per-connection server daemon (10.0.0.1:33262). Jan 20 02:35:23.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.97:22-10.0.0.1:33262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:35:23.403407 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:35:23.403510 kernel: audit: type=1130 audit(1768876523.372:1216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.97:22-10.0.0.1:33262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:35:24.040000 audit[7970]: USER_ACCT pid=7970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:24.054654 sshd[7970]: Accepted publickey for core from 10.0.0.1 port 33262 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:35:24.061017 sshd-session[7970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:35:24.082030 kernel: audit: type=1101 audit(1768876524.040:1217): pid=7970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:24.082188 kernel: audit: type=1103 audit(1768876524.053:1218): pid=7970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:24.053000 audit[7970]: CRED_ACQ pid=7970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:24.088638 systemd-logind[1619]: New session 62 of user core. 
Jan 20 02:35:24.053000 audit[7970]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7f8938e0 a2=3 a3=0 items=0 ppid=1 pid=7970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=62 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:35:24.180671 kernel: audit: type=1006 audit(1768876524.053:1219): pid=7970 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=62 res=1
Jan 20 02:35:24.180827 kernel: audit: type=1300 audit(1768876524.053:1219): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7f8938e0 a2=3 a3=0 items=0 ppid=1 pid=7970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=62 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:35:24.180875 kernel: audit: type=1327 audit(1768876524.053:1219): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:35:24.053000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:35:24.184479 systemd[1]: Started session-62.scope - Session 62 of User core.
Jan 20 02:35:24.245000 audit[7970]: USER_START pid=7970 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:24.330361 kernel: audit: type=1105 audit(1768876524.245:1220): pid=7970 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:24.291000 audit[7974]: CRED_ACQ pid=7974 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:24.407782 kernel: audit: type=1103 audit(1768876524.291:1221): pid=7974 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:26.136399 sshd[7974]: Connection closed by 10.0.0.1 port 33262
Jan 20 02:35:26.140896 sshd-session[7970]: pam_unix(sshd:session): session closed for user core
Jan 20 02:35:26.197000 audit[7970]: USER_END pid=7970 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:26.297695 kernel: audit: type=1106 audit(1768876526.197:1222): pid=7970 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:26.204000 audit[7970]: CRED_DISP pid=7970 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:26.300150 systemd[1]: sshd@60-10.0.0.97:22-10.0.0.1:33262.service: Deactivated successfully.
Jan 20 02:35:26.359099 systemd[1]: session-62.scope: Deactivated successfully.
Jan 20 02:35:26.373314 kernel: audit: type=1104 audit(1768876526.204:1223): pid=7970 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:26.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.97:22-10.0.0.1:33262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:35:26.364612 systemd-logind[1619]: Session 62 logged out. Waiting for processes to exit.
Jan 20 02:35:26.388889 systemd-logind[1619]: Removed session 62.
Jan 20 02:35:27.560043 kubelet[3053]: E0120 02:35:27.555800 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866"
Jan 20 02:35:30.580024 kubelet[3053]: E0120 02:35:30.566345 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df"
Jan 20 02:35:31.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.97:22-10.0.0.1:59286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:35:31.233726 systemd[1]: Started sshd@61-10.0.0.97:22-10.0.0.1:59286.service - OpenSSH per-connection server daemon (10.0.0.1:59286).
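The kubelet entries above all follow the same klog `pod_workers.go` shape: an `err="…"` field with escaped quotes, then `pod="…"` and `podUID="…"`. A minimal sketch for pulling the pod and failing container out of such a line (the `entry` string below is one record from this log, abbreviated to the fields being parsed; the regexes are illustrative, not an official parser):

```python
import re

# One kubelet pod_workers entry from the log above, shortened to the parsed fields.
entry = ('E0120 02:35:30.566345 3053 pod_workers.go:1324] "Error syncing pod, skipping" '
         'err="failed to \\"StartContainer\\" for \\"calico-apiserver\\" with ImagePullBackOff" '
         'pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" '
         'podUID="4d193768-31ad-4962-ae34-80e85c7499df"')

# pod="..." is plain-quoted; the container name sits inside escaped quotes after 'for'.
pod = re.search(r'pod="([^"]+)"', entry).group(1)
container = re.search(r'for \\"([^\\]+)\\"', entry).group(1)
print(f"pod={pod} container={container}")
```

Running this over a full journal would surface every pod stuck in ImagePullBackOff without reading each multi-kilobyte record by hand.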
Jan 20 02:35:31.249975 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:35:31.250048 kernel: audit: type=1130 audit(1768876531.232:1225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.97:22-10.0.0.1:59286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:35:31.579591 kubelet[3053]: E0120 02:35:31.575839 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db"
Jan 20 02:35:31.579591 kubelet[3053]: E0120 02:35:31.579429 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509"
Jan 20 02:35:32.282000 audit[7990]: USER_ACCT pid=7990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:32.307353 sshd[7990]: Accepted publickey for core from 10.0.0.1 port 59286 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:35:32.349871 kernel: audit: type=1101 audit(1768876532.282:1226): pid=7990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:32.350092 kernel: audit: type=1103 audit(1768876532.311:1227): pid=7990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:32.311000 audit[7990]: CRED_ACQ pid=7990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:32.323848 sshd-session[7990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:35:32.398094 kernel: audit: type=1006 audit(1768876532.311:1228): pid=7990 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=63 res=1
Jan 20 02:35:32.311000 audit[7990]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc87d5770 a2=3 a3=0 items=0 ppid=1 pid=7990 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=63 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:35:32.490726 kernel: audit: type=1300 audit(1768876532.311:1228): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc87d5770 a2=3 a3=0 items=0 ppid=1 pid=7990 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=63 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:35:32.454404 systemd-logind[1619]: New session 63 of user core.
Jan 20 02:35:32.311000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:35:32.526631 kernel: audit: type=1327 audit(1768876532.311:1228): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:35:32.602733 systemd[1]: Started session-63.scope - Session 63 of User core.
Jan 20 02:35:32.691457 kubelet[3053]: E0120 02:35:32.691400 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1"
Jan 20 02:35:32.715000 audit[7990]: USER_START pid=7990 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:32.799825 kernel: audit: type=1105 audit(1768876532.715:1229): pid=7990 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:32.775000 audit[7994]: CRED_ACQ pid=7994 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:32.863316 kernel: audit: type=1103 audit(1768876532.775:1230): pid=7994 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:33.652002 kubelet[3053]: E0120 02:35:33.651945 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb"
Jan 20 02:35:34.432246 sshd[7994]: Connection closed by 10.0.0.1 port 59286
Jan 20 02:35:34.434868 sshd-session[7990]: pam_unix(sshd:session): session closed for user core
Jan 20 02:35:34.459000 audit[7990]: USER_END pid=7990 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:34.627211 kernel: audit: type=1106 audit(1768876534.459:1231): pid=7990 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:34.705088 kernel: audit: type=1104 audit(1768876534.468:1232): pid=7990 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:34.468000 audit[7990]: CRED_DISP pid=7990 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:34.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.97:22-10.0.0.1:59286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:35:34.686075 systemd[1]: sshd@61-10.0.0.97:22-10.0.0.1:59286.service: Deactivated successfully.
Jan 20 02:35:34.737084 systemd[1]: session-63.scope: Deactivated successfully.
Jan 20 02:35:34.780344 systemd-logind[1619]: Session 63 logged out. Waiting for processes to exit.
Jan 20 02:35:34.807134 systemd-logind[1619]: Removed session 63.
Jan 20 02:35:39.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.97:22-10.0.0.1:57234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:35:39.525026 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:35:39.525085 kernel: audit: type=1130 audit(1768876539.514:1234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.97:22-10.0.0.1:57234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:35:39.517405 systemd[1]: Started sshd@62-10.0.0.97:22-10.0.0.1:57234.service - OpenSSH per-connection server daemon (10.0.0.1:57234).
Jan 20 02:35:40.125000 audit[8035]: USER_ACCT pid=8035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:40.129447 sshd[8035]: Accepted publickey for core from 10.0.0.1 port 57234 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:35:40.156883 sshd-session[8035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:35:40.145000 audit[8035]: CRED_ACQ pid=8035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:40.271594 kernel: audit: type=1101 audit(1768876540.125:1235): pid=8035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:40.271745 kernel: audit: type=1103 audit(1768876540.145:1236): pid=8035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:40.271818 kernel: audit: type=1006 audit(1768876540.145:1237): pid=8035 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=64 res=1
Jan 20 02:35:40.145000 audit[8035]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc053a71c0 a2=3 a3=0 items=0 ppid=1 pid=8035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=64 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:35:40.376677 systemd-logind[1619]: New session 64 of user core.
Jan 20 02:35:40.394969 kernel: audit: type=1300 audit(1768876540.145:1237): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc053a71c0 a2=3 a3=0 items=0 ppid=1 pid=8035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=64 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:35:40.395174 kernel: audit: type=1327 audit(1768876540.145:1237): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:35:40.145000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:35:40.455371 systemd[1]: Started session-64.scope - Session 64 of User core.
Jan 20 02:35:40.505000 audit[8035]: USER_START pid=8035 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:40.580631 kernel: audit: type=1105 audit(1768876540.505:1238): pid=8035 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:40.664342 kernel: audit: type=1103 audit(1768876540.542:1239): pid=8039 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:40.542000 audit[8039]: CRED_ACQ pid=8039 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:41.814657 sshd[8039]: Connection closed by 10.0.0.1 port 57234
Jan 20 02:35:41.813822 sshd-session[8035]: pam_unix(sshd:session): session closed for user core
Jan 20 02:35:41.824000 audit[8035]: USER_END pid=8035 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:41.854643 systemd-logind[1619]: Session 64 logged out. Waiting for processes to exit.
Jan 20 02:35:41.868446 systemd[1]: sshd@62-10.0.0.97:22-10.0.0.1:57234.service: Deactivated successfully.
Jan 20 02:35:41.884033 systemd[1]: session-64.scope: Deactivated successfully.
Jan 20 02:35:41.921059 kernel: audit: type=1106 audit(1768876541.824:1240): pid=8035 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:41.921255 kernel: audit: type=1104 audit(1768876541.825:1241): pid=8035 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:41.825000 audit[8035]: CRED_DISP pid=8035 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:41.917325 systemd-logind[1619]: Removed session 64.
Jan 20 02:35:41.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.97:22-10.0.0.1:57234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:35:42.538330 kubelet[3053]: E0120 02:35:42.535854 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:35:42.555832 kubelet[3053]: E0120 02:35:42.555377 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866"
Jan 20 02:35:43.519989 kubelet[3053]: E0120 02:35:43.516894 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db"
Jan 20 02:35:45.534429 kubelet[3053]: E0120 02:35:45.533359 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df"
Jan 20 02:35:45.643601 kubelet[3053]: E0120 02:35:45.643345 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509"
Jan 20 02:35:47.002363 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:35:47.002591 kernel: audit: type=1130 audit(1768876546.955:1243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@63-10.0.0.97:22-10.0.0.1:43142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:35:46.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@63-10.0.0.97:22-10.0.0.1:43142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:35:46.956053 systemd[1]: Started sshd@63-10.0.0.97:22-10.0.0.1:43142.service - OpenSSH per-connection server daemon (10.0.0.1:43142).
Jan 20 02:35:47.530357 kubelet[3053]: E0120 02:35:47.528205 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1"
Jan 20 02:35:47.547810 kubelet[3053]: E0120 02:35:47.544492 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb"
Jan 20 02:35:47.637304 kernel: audit: type=1101 audit(1768876547.574:1244): pid=8061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:47.574000 audit[8061]: USER_ACCT pid=8061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:47.616581 sshd-session[8061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:35:47.642474 sshd[8061]: Accepted publickey for core from 10.0.0.1 port 43142 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:35:47.590000 audit[8061]: CRED_ACQ pid=8061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:47.768190 kernel: audit: type=1103 audit(1768876547.590:1245): pid=8061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:47.776932 systemd-logind[1619]: New session 65 of user core.
Jan 20 02:35:47.811163 kernel: audit: type=1006 audit(1768876547.590:1246): pid=8061 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=65 res=1
Jan 20 02:35:47.590000 audit[8061]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde9146880 a2=3 a3=0 items=0 ppid=1 pid=8061 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=65 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:35:47.590000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:35:47.867040 kernel: audit: type=1300 audit(1768876547.590:1246): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde9146880 a2=3 a3=0 items=0 ppid=1 pid=8061 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=65 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:35:47.868279 kernel: audit: type=1327 audit(1768876547.590:1246): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:35:47.863754 systemd[1]: Started session-65.scope - Session 65 of User core.
Jan 20 02:35:47.894000 audit[8061]: USER_START pid=8061 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:48.021484 kernel: audit: type=1105 audit(1768876547.894:1247): pid=8061 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:48.021660 kernel: audit: type=1103 audit(1768876547.921:1248): pid=8065 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:47.921000 audit[8065]: CRED_ACQ pid=8065 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:49.100022 sshd[8065]: Connection closed by 10.0.0.1 port 43142
Jan 20 02:35:49.103842 sshd-session[8061]: pam_unix(sshd:session): session closed for user core
Jan 20 02:35:49.177000 audit[8061]: USER_END pid=8061 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:49.223876 systemd[1]: sshd@63-10.0.0.97:22-10.0.0.1:43142.service: Deactivated successfully.
Jan 20 02:35:49.261670 systemd[1]: session-65.scope: Deactivated successfully.
Jan 20 02:35:49.276164 kernel: audit: type=1106 audit(1768876549.177:1249): pid=8061 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:49.276316 kernel: audit: type=1104 audit(1768876549.177:1250): pid=8061 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:49.177000 audit[8061]: CRED_DISP pid=8061 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:49.285715 systemd-logind[1619]: Session 65 logged out. Waiting for processes to exit.
Jan 20 02:35:49.305945 systemd-logind[1619]: Removed session 65.
Jan 20 02:35:49.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@63-10.0.0.97:22-10.0.0.1:43142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:35:51.518452 kubelet[3053]: E0120 02:35:51.516327 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:35:53.557805 kubelet[3053]: E0120 02:35:53.557195 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866"
Jan 20 02:35:54.157816 systemd[1]: Started sshd@64-10.0.0.97:22-10.0.0.1:43150.service - OpenSSH per-connection server daemon (10.0.0.1:43150).
Jan 20 02:35:54.201621 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:35:54.201786 kernel: audit: type=1130 audit(1768876554.153:1252): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-10.0.0.97:22-10.0.0.1:43150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:35:54.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-10.0.0.97:22-10.0.0.1:43150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:35:54.927650 sshd[8083]: Accepted publickey for core from 10.0.0.1 port 43150 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:35:54.924000 audit[8083]: USER_ACCT pid=8083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:54.983596 sshd-session[8083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:35:55.015189 kernel: audit: type=1101 audit(1768876554.924:1253): pid=8083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:54.944000 audit[8083]: CRED_ACQ pid=8083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:55.106881 kernel: audit: type=1103 audit(1768876554.944:1254): pid=8083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:35:54.945000 audit[8083]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1cd1c8f0 a2=3 a3=0 items=0 ppid=1 pid=8083 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=66 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:35:55.151184 systemd-logind[1619]: New session 66 of user core.
Jan 20 02:35:55.197267 kernel: audit: type=1006 audit(1768876554.945:1255): pid=8083 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=66 res=1 Jan 20 02:35:55.197462 kernel: audit: type=1300 audit(1768876554.945:1255): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1cd1c8f0 a2=3 a3=0 items=0 ppid=1 pid=8083 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=66 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:35:55.197580 kernel: audit: type=1327 audit(1768876554.945:1255): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:35:54.945000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:35:55.227048 systemd[1]: Started session-66.scope - Session 66 of User core. Jan 20 02:35:55.411597 kernel: audit: type=1105 audit(1768876555.304:1256): pid=8083 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:55.304000 audit[8083]: USER_START pid=8083 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:55.354000 audit[8094]: CRED_ACQ pid=8094 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:55.509263 kernel: audit: type=1103 audit(1768876555.354:1257): 
pid=8094 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:56.986305 sshd[8094]: Connection closed by 10.0.0.1 port 43150 Jan 20 02:35:56.984355 sshd-session[8083]: pam_unix(sshd:session): session closed for user core Jan 20 02:35:57.111153 kernel: audit: type=1106 audit(1768876556.987:1258): pid=8083 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:56.987000 audit[8083]: USER_END pid=8083 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:57.116880 systemd[1]: sshd@64-10.0.0.97:22-10.0.0.1:43150.service: Deactivated successfully. Jan 20 02:35:56.987000 audit[8083]: CRED_DISP pid=8083 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:57.217193 systemd[1]: session-66.scope: Deactivated successfully. 
Jan 20 02:35:57.271834 kernel: audit: type=1104 audit(1768876556.987:1259): pid=8083 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:35:57.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-10.0.0.97:22-10.0.0.1:43150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:35:57.344672 systemd-logind[1619]: Session 66 logged out. Waiting for processes to exit. Jan 20 02:35:57.383765 systemd-logind[1619]: Removed session 66. Jan 20 02:35:57.578792 kubelet[3053]: E0120 02:35:57.578729 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:35:58.566728 kubelet[3053]: E0120 02:35:58.561765 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:35:58.568402 kubelet[3053]: E0120 02:35:58.568275 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:36:00.548375 kubelet[3053]: E0120 02:36:00.540582 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:36:00.562758 kubelet[3053]: E0120 02:36:00.562633 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:36:01.608160 kubelet[3053]: E0120 02:36:01.604858 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:36:02.068705 systemd[1]: Started sshd@65-10.0.0.97:22-10.0.0.1:56428.service - OpenSSH per-connection server daemon (10.0.0.1:56428). Jan 20 02:36:02.087744 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:36:02.087921 kernel: audit: type=1130 audit(1768876562.072:1261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-10.0.0.97:22-10.0.0.1:56428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:02.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-10.0.0.97:22-10.0.0.1:56428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:36:02.631000 audit[8116]: USER_ACCT pid=8116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:02.647948 sshd-session[8116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:02.652357 kernel: audit: type=1101 audit(1768876562.631:1262): pid=8116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:02.652429 sshd[8116]: Accepted publickey for core from 10.0.0.1 port 56428 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:36:02.642000 audit[8116]: CRED_ACQ pid=8116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:02.694486 systemd-logind[1619]: New session 67 of user core. 
Jan 20 02:36:02.762648 kernel: audit: type=1103 audit(1768876562.642:1263): pid=8116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:02.762813 kernel: audit: type=1006 audit(1768876562.642:1264): pid=8116 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=67 res=1 Jan 20 02:36:02.642000 audit[8116]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0fccf260 a2=3 a3=0 items=0 ppid=1 pid=8116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=67 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:02.817261 kernel: audit: type=1300 audit(1768876562.642:1264): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0fccf260 a2=3 a3=0 items=0 ppid=1 pid=8116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=67 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:02.642000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:02.849838 systemd[1]: Started session-67.scope - Session 67 of User core. 
Jan 20 02:36:02.881900 kernel: audit: type=1327 audit(1768876562.642:1264): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:02.882000 audit[8116]: USER_START pid=8116 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:02.999309 kernel: audit: type=1105 audit(1768876562.882:1265): pid=8116 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:02.999607 kernel: audit: type=1103 audit(1768876562.904:1266): pid=8120 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:02.904000 audit[8120]: CRED_ACQ pid=8120 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:03.755465 sshd[8120]: Connection closed by 10.0.0.1 port 56428 Jan 20 02:36:03.758958 sshd-session[8116]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:03.764000 audit[8116]: USER_END pid=8116 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 20 02:36:03.798041 kernel: audit: type=1106 audit(1768876563.764:1267): pid=8116 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:03.798181 kernel: audit: type=1104 audit(1768876563.764:1268): pid=8116 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:03.764000 audit[8116]: CRED_DISP pid=8116 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:03.805822 systemd[1]: sshd@65-10.0.0.97:22-10.0.0.1:56428.service: Deactivated successfully. Jan 20 02:36:03.818776 systemd[1]: session-67.scope: Deactivated successfully. Jan 20 02:36:03.825799 systemd-logind[1619]: Session 67 logged out. Waiting for processes to exit. Jan 20 02:36:03.829861 systemd-logind[1619]: Removed session 67. Jan 20 02:36:03.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-10.0.0.97:22-10.0.0.1:56428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:36:05.516169 kubelet[3053]: E0120 02:36:05.515779 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:36:08.835488 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:36:08.835716 kernel: audit: type=1130 audit(1768876568.800:1270): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@66-10.0.0.97:22-10.0.0.1:45842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:08.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@66-10.0.0.97:22-10.0.0.1:45842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:08.801000 systemd[1]: Started sshd@66-10.0.0.97:22-10.0.0.1:45842.service - OpenSSH per-connection server daemon (10.0.0.1:45842). 
Jan 20 02:36:09.417000 audit[8159]: USER_ACCT pid=8159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:09.425508 sshd[8159]: Accepted publickey for core from 10.0.0.1 port 45842 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:36:09.455712 kernel: audit: type=1101 audit(1768876569.417:1271): pid=8159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:09.449844 sshd-session[8159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:09.442000 audit[8159]: CRED_ACQ pid=8159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:09.506926 kernel: audit: type=1103 audit(1768876569.442:1272): pid=8159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:09.507069 kernel: audit: type=1006 audit(1768876569.442:1273): pid=8159 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=68 res=1 Jan 20 02:36:09.508657 systemd-logind[1619]: New session 68 of user core. 
Jan 20 02:36:09.442000 audit[8159]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd231f85e0 a2=3 a3=0 items=0 ppid=1 pid=8159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=68 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:09.576504 kernel: audit: type=1300 audit(1768876569.442:1273): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd231f85e0 a2=3 a3=0 items=0 ppid=1 pid=8159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=68 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:09.442000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:09.584657 systemd[1]: Started session-68.scope - Session 68 of User core. Jan 20 02:36:09.606779 kernel: audit: type=1327 audit(1768876569.442:1273): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:09.692000 audit[8159]: USER_START pid=8159 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:09.766585 kernel: audit: type=1105 audit(1768876569.692:1274): pid=8159 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:09.766756 kernel: audit: type=1103 audit(1768876569.723:1275): pid=8164 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:09.723000 audit[8164]: CRED_ACQ pid=8164 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:10.626597 kubelet[3053]: E0120 02:36:10.607435 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:10.659757 kubelet[3053]: E0120 02:36:10.659699 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:36:11.009365 sshd[8164]: Connection closed by 10.0.0.1 port 45842 Jan 20 02:36:11.018591 sshd-session[8159]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:11.019000 audit[8159]: USER_END pid=8159 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:11.070418 systemd-logind[1619]: Session 68 logged out. Waiting for processes to exit. 
Jan 20 02:36:11.107975 kernel: audit: type=1106 audit(1768876571.019:1276): pid=8159 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:11.104866 systemd[1]: sshd@66-10.0.0.97:22-10.0.0.1:45842.service: Deactivated successfully. Jan 20 02:36:11.019000 audit[8159]: CRED_DISP pid=8159 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:11.150659 systemd[1]: session-68.scope: Deactivated successfully. Jan 20 02:36:11.204598 kernel: audit: type=1104 audit(1768876571.019:1277): pid=8159 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:11.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@66-10.0.0.97:22-10.0.0.1:45842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:11.232693 systemd-logind[1619]: Removed session 68. 
Jan 20 02:36:11.548411 kubelet[3053]: E0120 02:36:11.545857 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:36:11.590237 kubelet[3053]: E0120 02:36:11.584340 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:36:15.525112 kubelet[3053]: E0120 02:36:15.521972 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:36:15.569567 kubelet[3053]: E0120 02:36:15.527708 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:36:16.107930 systemd[1]: Started sshd@67-10.0.0.97:22-10.0.0.1:40972.service - OpenSSH per-connection server daemon (10.0.0.1:40972). Jan 20 02:36:16.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-10.0.0.97:22-10.0.0.1:40972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:16.195788 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:36:16.195992 kernel: audit: type=1130 audit(1768876576.107:1279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-10.0.0.97:22-10.0.0.1:40972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:36:16.942000 audit[8179]: USER_ACCT pid=8179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:16.957145 sshd-session[8179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:16.962973 sshd[8179]: Accepted publickey for core from 10.0.0.1 port 40972 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:36:17.011621 kernel: audit: type=1101 audit(1768876576.942:1280): pid=8179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:17.011760 kernel: audit: type=1103 audit(1768876576.943:1281): pid=8179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:16.943000 audit[8179]: CRED_ACQ pid=8179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:17.008145 systemd-logind[1619]: New session 69 of user core. 
Jan 20 02:36:17.078496 kernel: audit: type=1006 audit(1768876576.943:1282): pid=8179 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=69 res=1 Jan 20 02:36:16.943000 audit[8179]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffd422e00 a2=3 a3=0 items=0 ppid=1 pid=8179 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=69 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:17.144317 kernel: audit: type=1300 audit(1768876576.943:1282): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffd422e00 a2=3 a3=0 items=0 ppid=1 pid=8179 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=69 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:17.147156 kernel: audit: type=1327 audit(1768876576.943:1282): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:16.943000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:17.145628 systemd[1]: Started session-69.scope - Session 69 of User core. 
Jan 20 02:36:17.188000 audit[8179]: USER_START pid=8179 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:17.211000 audit[8183]: CRED_ACQ pid=8183 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:17.262579 kernel: audit: type=1105 audit(1768876577.188:1283): pid=8179 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:17.262739 kernel: audit: type=1103 audit(1768876577.211:1284): pid=8183 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:18.137208 sshd[8183]: Connection closed by 10.0.0.1 port 40972 Jan 20 02:36:18.149363 sshd-session[8179]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:18.162000 audit[8179]: USER_END pid=8179 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:18.194015 systemd[1]: sshd@67-10.0.0.97:22-10.0.0.1:40972.service: Deactivated successfully. 
Jan 20 02:36:18.244761 kernel: audit: type=1106 audit(1768876578.162:1285): pid=8179 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:18.244920 kernel: audit: type=1104 audit(1768876578.173:1286): pid=8179 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:18.173000 audit[8179]: CRED_DISP pid=8179 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:18.259659 systemd[1]: session-69.scope: Deactivated successfully. Jan 20 02:36:18.292280 systemd-logind[1619]: Session 69 logged out. Waiting for processes to exit. Jan 20 02:36:18.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-10.0.0.97:22-10.0.0.1:40972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:18.313694 systemd-logind[1619]: Removed session 69. 
Jan 20 02:36:19.525581 kubelet[3053]: E0120 02:36:19.525190 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:20.536929 kubelet[3053]: E0120 02:36:20.521980 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:36:22.515586 kubelet[3053]: E0120 02:36:22.515205 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:23.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-10.0.0.97:22-10.0.0.1:40982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:23.195690 systemd[1]: Started sshd@68-10.0.0.97:22-10.0.0.1:40982.service - OpenSSH per-connection server daemon (10.0.0.1:40982). 
Jan 20 02:36:23.219858 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:36:23.220016 kernel: audit: type=1130 audit(1768876583.194:1288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-10.0.0.97:22-10.0.0.1:40982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:23.542008 kubelet[3053]: E0120 02:36:23.532006 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:36:23.614595 kubelet[3053]: E0120 02:36:23.598486 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:36:24.393363 kernel: audit: type=1101 audit(1768876584.294:1289): pid=8197 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:24.294000 audit[8197]: USER_ACCT pid=8197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:24.392017 sshd-session[8197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:24.406267 sshd[8197]: Accepted publickey for core from 10.0.0.1 port 40982 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:36:24.351000 audit[8197]: CRED_ACQ pid=8197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:24.495591 kernel: audit: type=1103 audit(1768876584.351:1290): pid=8197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:24.495753 kernel: audit: type=1006 audit(1768876584.366:1291): pid=8197 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=70 res=1 Jan 20 02:36:24.366000 audit[8197]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe509e7ea0 a2=3 a3=0 items=0 ppid=1 pid=8197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=70 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:24.567300 systemd-logind[1619]: New session 70 of 
user core. Jan 20 02:36:24.614347 kernel: audit: type=1300 audit(1768876584.366:1291): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe509e7ea0 a2=3 a3=0 items=0 ppid=1 pid=8197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=70 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:24.614577 kernel: audit: type=1327 audit(1768876584.366:1291): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:24.366000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:24.688800 systemd[1]: Started session-70.scope - Session 70 of User core. Jan 20 02:36:24.720438 kubelet[3053]: E0120 02:36:24.709769 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:36:24.870000 audit[8197]: USER_START pid=8197 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:24.967245 kernel: audit: type=1105 audit(1768876584.870:1292): pid=8197 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:25.022000 audit[8201]: CRED_ACQ pid=8201 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:25.104344 kernel: audit: type=1103 audit(1768876585.022:1293): pid=8201 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:25.716108 sshd[8201]: Connection closed by 10.0.0.1 port 40982 Jan 20 02:36:25.726280 sshd-session[8197]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:25.733000 audit[8197]: USER_END pid=8197 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:25.761060 systemd-logind[1619]: Session 70 logged out. Waiting for processes to exit. Jan 20 02:36:25.783473 systemd[1]: sshd@68-10.0.0.97:22-10.0.0.1:40982.service: Deactivated successfully. 
Jan 20 02:36:25.803002 kernel: audit: type=1106 audit(1768876585.733:1294): pid=8197 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:25.803245 kernel: audit: type=1104 audit(1768876585.735:1295): pid=8197 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:25.735000 audit[8197]: CRED_DISP pid=8197 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:25.813302 systemd[1]: session-70.scope: Deactivated successfully. Jan 20 02:36:25.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-10.0.0.97:22-10.0.0.1:40982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:25.826785 systemd-logind[1619]: Removed session 70. 
Jan 20 02:36:26.526199 kubelet[3053]: E0120 02:36:26.525370 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:36:29.542837 kubelet[3053]: E0120 02:36:29.534984 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:36:30.831677 systemd[1]: Started sshd@69-10.0.0.97:22-10.0.0.1:59740.service - OpenSSH per-connection server daemon (10.0.0.1:59740). Jan 20 02:36:30.894017 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:36:30.894214 kernel: audit: type=1130 audit(1768876590.826:1297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-10.0.0.97:22-10.0.0.1:59740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:30.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-10.0.0.97:22-10.0.0.1:59740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:36:32.279964 kernel: audit: type=1101 audit(1768876592.129:1298): pid=8216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:32.129000 audit[8216]: USER_ACCT pid=8216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:35.117335 sshd[8216]: Accepted publickey for core from 10.0.0.1 port 59740 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:36:36.607493 kernel: audit: type=1103 audit(1768876596.415:1299): pid=8216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:36.415000 audit[8216]: CRED_ACQ pid=8216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:36.725207 kernel: audit: type=1006 audit(1768876596.630:1300): pid=8216 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=71 res=1 Jan 20 02:36:36.630000 audit[8216]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5f9527e0 a2=3 a3=0 items=0 ppid=1 pid=8216 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=71 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:37.398625 kernel: audit: type=1300 audit(1768876596.630:1300): 
arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5f9527e0 a2=3 a3=0 items=0 ppid=1 pid=8216 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=71 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:37.398708 kernel: audit: type=1327 audit(1768876596.630:1300): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:36.630000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:37.314577 sshd-session[8216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:37.452770 kubelet[3053]: E0120 02:36:37.444009 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:36:37.634121 kubelet[3053]: E0120 02:36:37.628772 3053 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.31s" Jan 20 02:36:37.671699 systemd-logind[1619]: New session 71 of user core. Jan 20 02:36:37.689705 systemd[1]: Started session-71.scope - Session 71 of User core. 
Jan 20 02:36:37.944763 kernel: audit: type=1105 audit(1768876597.775:1301): pid=8216 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:37.775000 audit[8216]: USER_START pid=8216 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:38.024000 audit[8222]: CRED_ACQ pid=8222 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:38.087486 kernel: audit: type=1103 audit(1768876598.024:1302): pid=8222 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:38.186633 kubelet[3053]: E0120 02:36:38.177609 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:36:38.186633 kubelet[3053]: E0120 02:36:38.178057 3053 
pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:36:38.186633 kubelet[3053]: E0120 02:36:38.178408 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:36:38.578648 kubelet[3053]: E0120 02:36:38.568744 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:36:39.893489 sshd[8222]: Connection closed by 10.0.0.1 port 59740 Jan 20 02:36:39.885031 sshd-session[8216]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:39.964839 kernel: audit: type=1106 audit(1768876599.895:1303): pid=8216 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:39.895000 audit[8216]: USER_END pid=8216 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:39.930910 systemd[1]: sshd@69-10.0.0.97:22-10.0.0.1:59740.service: Deactivated successfully. Jan 20 02:36:39.971672 systemd[1]: session-71.scope: Deactivated successfully. Jan 20 02:36:39.895000 audit[8216]: CRED_DISP pid=8216 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:40.029874 kernel: audit: type=1104 audit(1768876599.895:1304): pid=8216 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:40.030692 systemd-logind[1619]: Session 71 logged out. Waiting for processes to exit. 
Jan 20 02:36:40.087595 kernel: audit: type=1131 audit(1768876599.925:1305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-10.0.0.97:22-10.0.0.1:59740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:39.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-10.0.0.97:22-10.0.0.1:59740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:40.082641 systemd-logind[1619]: Removed session 71. Jan 20 02:36:40.521268 kubelet[3053]: E0120 02:36:40.520970 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:43.535256 kubelet[3053]: E0120 02:36:43.532213 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:36:43.579477 kubelet[3053]: E0120 02:36:43.575868 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:36:45.025368 systemd[1]: Started sshd@70-10.0.0.97:22-10.0.0.1:36226.service - OpenSSH per-connection server daemon (10.0.0.1:36226). 
Jan 20 02:36:45.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-10.0.0.97:22-10.0.0.1:36226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:45.102811 kernel: audit: type=1130 audit(1768876605.032:1306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-10.0.0.97:22-10.0.0.1:36226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:45.953000 audit[8266]: USER_ACCT pid=8266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:45.958877 sshd[8266]: Accepted publickey for core from 10.0.0.1 port 36226 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:36:45.985197 sshd-session[8266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:46.010771 kernel: audit: type=1101 audit(1768876605.953:1307): pid=8266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:45.957000 audit[8266]: CRED_ACQ pid=8266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:46.049364 kernel: audit: type=1103 audit(1768876605.957:1308): pid=8266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:45.957000 audit[8266]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffe81ffa0 a2=3 a3=0 items=0 ppid=1 pid=8266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=72 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:46.146830 kernel: audit: type=1006 audit(1768876605.957:1309): pid=8266 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=72 res=1 Jan 20 02:36:46.146962 kernel: audit: type=1300 audit(1768876605.957:1309): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffe81ffa0 a2=3 a3=0 items=0 ppid=1 pid=8266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=72 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:46.147010 kernel: audit: type=1327 audit(1768876605.957:1309): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:45.957000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:46.149651 systemd-logind[1619]: New session 72 of user core. Jan 20 02:36:46.173212 systemd[1]: Started session-72.scope - Session 72 of User core. 
Jan 20 02:36:46.205000 audit[8266]: USER_START pid=8266 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:46.264916 kernel: audit: type=1105 audit(1768876606.205:1310): pid=8266 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:46.247000 audit[8272]: CRED_ACQ pid=8272 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:46.311012 kernel: audit: type=1103 audit(1768876606.247:1311): pid=8272 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:46.988023 sshd[8272]: Connection closed by 10.0.0.1 port 36226 Jan 20 02:36:46.990306 sshd-session[8266]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:47.000000 audit[8266]: USER_END pid=8266 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:47.030312 systemd[1]: sshd@70-10.0.0.97:22-10.0.0.1:36226.service: Deactivated successfully. 
Jan 20 02:36:47.039798 kernel: audit: type=1106 audit(1768876607.000:1312): pid=8266 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:47.000000 audit[8266]: CRED_DISP pid=8266 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:47.047006 systemd[1]: session-72.scope: Deactivated successfully. Jan 20 02:36:47.057841 systemd-logind[1619]: Session 72 logged out. Waiting for processes to exit. Jan 20 02:36:47.064821 kernel: audit: type=1104 audit(1768876607.000:1313): pid=8266 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:47.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-10.0.0.97:22-10.0.0.1:36226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:47.081248 systemd-logind[1619]: Removed session 72. 
Jan 20 02:36:51.523025 kubelet[3053]: E0120 02:36:51.522104 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:36:51.523025 kubelet[3053]: E0120 02:36:51.522475 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:36:52.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-10.0.0.97:22-10.0.0.1:36230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:52.149414 systemd[1]: Started sshd@71-10.0.0.97:22-10.0.0.1:36230.service - OpenSSH per-connection server daemon (10.0.0.1:36230). Jan 20 02:36:52.187182 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:36:52.187331 kernel: audit: type=1130 audit(1768876612.148:1315): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-10.0.0.97:22-10.0.0.1:36230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:36:52.559256 kubelet[3053]: E0120 02:36:52.546748 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:36:52.607761 kubelet[3053]: E0120 02:36:52.602126 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:36:52.778000 audit[8289]: USER_ACCT pid=8289 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:52.809665 sshd[8289]: Accepted publickey for core from 10.0.0.1 port 36230 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:36:52.820681 sshd-session[8289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:52.832938 kernel: audit: type=1101 audit(1768876612.778:1316): pid=8289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:52.808000 audit[8289]: CRED_ACQ pid=8289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:52.906890 kernel: audit: type=1103 audit(1768876612.808:1317): pid=8289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:52.907846 kernel: audit: type=1006 audit(1768876612.817:1318): pid=8289 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=73 res=1 Jan 20 02:36:52.921893 systemd-logind[1619]: New session 73 of user core. 
Jan 20 02:36:52.817000 audit[8289]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe201e9da0 a2=3 a3=0 items=0 ppid=1 pid=8289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=73 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:53.003272 kernel: audit: type=1300 audit(1768876612.817:1318): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe201e9da0 a2=3 a3=0 items=0 ppid=1 pid=8289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=73 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:53.018608 kernel: audit: type=1327 audit(1768876612.817:1318): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:52.817000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:53.033784 systemd[1]: Started session-73.scope - Session 73 of User core. 
Jan 20 02:36:53.088000 audit[8289]: USER_START pid=8289 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:53.116000 audit[8293]: CRED_ACQ pid=8293 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:53.251364 kernel: audit: type=1105 audit(1768876613.088:1319): pid=8289 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:53.251486 kernel: audit: type=1103 audit(1768876613.116:1320): pid=8293 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:53.525406 kubelet[3053]: E0120 02:36:53.524860 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:36:54.408290 sshd[8293]: Connection closed by 10.0.0.1 port 36230 Jan 
20 02:36:54.412894 sshd-session[8289]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:54.510000 audit[8289]: USER_END pid=8289 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:54.585439 kernel: audit: type=1106 audit(1768876614.510:1321): pid=8289 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:54.596733 kernel: audit: type=1104 audit(1768876614.510:1322): pid=8289 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:54.510000 audit[8289]: CRED_DISP pid=8289 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:54.573241 systemd[1]: Started sshd@72-10.0.0.97:22-10.0.0.1:51140.service - OpenSSH per-connection server daemon (10.0.0.1:51140). Jan 20 02:36:54.582323 systemd[1]: sshd@71-10.0.0.97:22-10.0.0.1:36230.service: Deactivated successfully. Jan 20 02:36:54.613199 systemd[1]: session-73.scope: Deactivated successfully. Jan 20 02:36:54.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-10.0.0.97:22-10.0.0.1:51140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:36:54.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-10.0.0.97:22-10.0.0.1:36230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:54.670904 systemd-logind[1619]: Session 73 logged out. Waiting for processes to exit. Jan 20 02:36:54.720967 systemd-logind[1619]: Removed session 73. Jan 20 02:36:55.077000 audit[8303]: USER_ACCT pid=8303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:55.083219 sshd[8303]: Accepted publickey for core from 10.0.0.1 port 51140 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:36:55.087000 audit[8303]: CRED_ACQ pid=8303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:55.087000 audit[8303]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7e2365d0 a2=3 a3=0 items=0 ppid=1 pid=8303 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=74 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:36:55.087000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:36:55.107211 sshd-session[8303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:36:55.186644 systemd-logind[1619]: New session 74 of user core. Jan 20 02:36:55.213138 systemd[1]: Started session-74.scope - Session 74 of User core. 
Jan 20 02:36:55.270000 audit[8303]: USER_START pid=8303 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:55.275000 audit[8311]: CRED_ACQ pid=8311 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:55.523933 kubelet[3053]: E0120 02:36:55.521971 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:36:58.264260 sshd[8311]: Connection closed by 10.0.0.1 port 51140 Jan 20 02:36:58.264448 sshd-session[8303]: pam_unix(sshd:session): session closed for user core Jan 20 02:36:58.273000 audit[8303]: USER_END pid=8303 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:58.368409 kernel: kauditd_printk_skb: 9 callbacks suppressed Jan 20 02:36:58.368889 kernel: audit: type=1106 audit(1768876618.273:1330): pid=8303 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:58.273000 audit[8303]: CRED_DISP pid=8303 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:58.443324 kernel: audit: type=1104 audit(1768876618.273:1331): pid=8303 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:36:58.512931 systemd[1]: sshd@72-10.0.0.97:22-10.0.0.1:51140.service: Deactivated successfully. Jan 20 02:36:58.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-10.0.0.97:22-10.0.0.1:51140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:58.607590 kernel: audit: type=1131 audit(1768876618.530:1332): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-10.0.0.97:22-10.0.0.1:51140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:58.587432 systemd[1]: session-74.scope: Deactivated successfully. Jan 20 02:36:58.656721 systemd-logind[1619]: Session 74 logged out. Waiting for processes to exit. Jan 20 02:36:58.699007 systemd[1]: Started sshd@73-10.0.0.97:22-10.0.0.1:51148.service - OpenSSH per-connection server daemon (10.0.0.1:51148). 
Jan 20 02:36:58.753245 kernel: audit: type=1130 audit(1768876618.697:1333): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@73-10.0.0.97:22-10.0.0.1:51148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:58.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@73-10.0.0.97:22-10.0.0.1:51148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:36:58.723410 systemd-logind[1619]: Removed session 74. Jan 20 02:37:00.026000 audit[8324]: USER_ACCT pid=8324 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:00.063078 sshd[8324]: Accepted publickey for core from 10.0.0.1 port 51148 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:37:00.094871 sshd-session[8324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:37:00.210215 kernel: audit: type=1101 audit(1768876620.026:1334): pid=8324 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:00.210379 kernel: audit: type=1103 audit(1768876620.045:1335): pid=8324 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:00.045000 audit[8324]: CRED_ACQ pid=8324 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:00.204912 systemd-logind[1619]: New session 75 of user core. Jan 20 02:37:00.244617 kernel: audit: type=1006 audit(1768876620.051:1336): pid=8324 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=75 res=1 Jan 20 02:37:00.051000 audit[8324]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4ecd3e00 a2=3 a3=0 items=0 ppid=1 pid=8324 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=75 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:00.248743 systemd[1]: Started session-75.scope - Session 75 of User core. Jan 20 02:37:00.335418 kernel: audit: type=1300 audit(1768876620.051:1336): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4ecd3e00 a2=3 a3=0 items=0 ppid=1 pid=8324 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=75 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:00.335647 kernel: audit: type=1327 audit(1768876620.051:1336): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:37:00.051000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:37:00.392000 audit[8324]: USER_START pid=8324 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:00.503348 kernel: audit: type=1105 audit(1768876620.392:1337): pid=8324 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:00.499000 audit[8330]: CRED_ACQ pid=8330 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:02.590833 kubelet[3053]: E0120 02:37:02.590604 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:03.561486 kubelet[3053]: E0120 02:37:03.561429 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:37:04.547619 kubelet[3053]: E0120 02:37:04.537076 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:37:05.440604 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 
02:37:05.440821 kernel: audit: type=1325 audit(1768876625.399:1339): table=filter:138 family=2 entries=14 op=nft_register_rule pid=8346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:37:05.399000 audit[8346]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=8346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:37:05.399000 audit[8346]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca5eb9e00 a2=0 a3=7ffca5eb9dec items=0 ppid=3166 pid=8346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:05.479892 sshd[8330]: Connection closed by 10.0.0.1 port 51148 Jan 20 02:37:05.491073 sshd-session[8324]: pam_unix(sshd:session): session closed for user core Jan 20 02:37:05.399000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:37:05.526079 kernel: audit: type=1300 audit(1768876625.399:1339): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca5eb9e00 a2=0 a3=7ffca5eb9dec items=0 ppid=3166 pid=8346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:05.536049 kernel: audit: type=1327 audit(1768876625.399:1339): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:37:05.536145 kernel: audit: type=1325 audit(1768876625.506:1340): table=nat:139 family=2 entries=20 op=nft_register_rule pid=8346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:37:05.506000 audit[8346]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=8346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:37:05.536321 
kubelet[3053]: E0120 02:37:05.517331 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:05.545495 kubelet[3053]: E0120 02:37:05.543241 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:37:05.550423 kernel: audit: type=1300 audit(1768876625.506:1340): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffca5eb9e00 a2=0 a3=7ffca5eb9dec items=0 ppid=3166 pid=8346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:05.506000 audit[8346]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffca5eb9e00 a2=0 a3=7ffca5eb9dec items=0 ppid=3166 pid=8346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:05.601678 kernel: audit: type=1327 audit(1768876625.506:1340): 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:37:05.506000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:37:05.541000 audit[8324]: USER_END pid=8324 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:05.668437 kernel: audit: type=1106 audit(1768876625.541:1341): pid=8324 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:05.670885 kubelet[3053]: E0120 02:37:05.670656 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:37:05.672403 kernel: audit: 
type=1104 audit(1768876625.543:1342): pid=8324 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:05.543000 audit[8324]: CRED_DISP pid=8324 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:05.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@73-10.0.0.97:22-10.0.0.1:51148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:37:05.801748 kernel: audit: type=1131 audit(1768876625.748:1343): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@73-10.0.0.97:22-10.0.0.1:51148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:37:05.753026 systemd[1]: sshd@73-10.0.0.97:22-10.0.0.1:51148.service: Deactivated successfully. Jan 20 02:37:05.788893 systemd[1]: session-75.scope: Deactivated successfully. Jan 20 02:37:05.790134 systemd[1]: session-75.scope: Consumed 1.329s CPU time, 46.5M memory peak. Jan 20 02:37:05.836666 systemd-logind[1619]: Session 75 logged out. Waiting for processes to exit. Jan 20 02:37:05.869920 systemd[1]: Started sshd@74-10.0.0.97:22-10.0.0.1:58344.service - OpenSSH per-connection server daemon (10.0.0.1:58344). Jan 20 02:37:05.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@74-10.0.0.97:22-10.0.0.1:58344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:37:05.880120 systemd-logind[1619]: Removed session 75. 
Jan 20 02:37:05.886750 kernel: audit: type=1130 audit(1768876625.869:1344): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@74-10.0.0.97:22-10.0.0.1:58344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:37:06.345000 audit[8351]: USER_ACCT pid=8351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:06.347315 sshd[8351]: Accepted publickey for core from 10.0.0.1 port 58344 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:37:06.362000 audit[8351]: CRED_ACQ pid=8351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:06.365000 audit[8351]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdde2dbb80 a2=3 a3=0 items=0 ppid=1 pid=8351 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=76 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:06.365000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:37:06.383124 sshd-session[8351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:37:06.457582 systemd-logind[1619]: New session 76 of user core. Jan 20 02:37:06.505787 systemd[1]: Started session-76.scope - Session 76 of User core. 
Jan 20 02:37:06.539000 audit[8351]: USER_START pid=8351 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:06.571319 kubelet[3053]: E0120 02:37:06.543888 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:37:06.571319 kubelet[3053]: E0120 02:37:06.546739 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb"
Jan 20 02:37:06.579000 audit[8355]: CRED_ACQ pid=8355 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:06.909000 audit[8362]: NETFILTER_CFG table=filter:140 family=2 entries=26 op=nft_register_rule pid=8362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 20 02:37:06.909000 audit[8362]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd2cbffc80 a2=0 a3=7ffd2cbffc6c items=0 ppid=3166 pid=8362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:37:06.909000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 20 02:37:06.941000 audit[8362]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=8362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 20 02:37:06.941000 audit[8362]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd2cbffc80 a2=0 a3=0 items=0 ppid=3166 pid=8362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:37:06.941000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273
Jan 20 02:37:09.115708 sshd[8355]: Connection closed by 10.0.0.1 port 58344
Jan 20 02:37:09.115016 sshd-session[8351]: pam_unix(sshd:session): session closed for user core
Jan 20 02:37:09.145000 audit[8351]: USER_END pid=8351 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:09.147000 audit[8351]: CRED_DISP pid=8351 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:09.207975 systemd[1]: sshd@74-10.0.0.97:22-10.0.0.1:58344.service: Deactivated successfully.
Jan 20 02:37:09.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@74-10.0.0.97:22-10.0.0.1:58344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:09.231906 systemd[1]: session-76.scope: Deactivated successfully.
Jan 20 02:37:09.243466 systemd-logind[1619]: Session 76 logged out. Waiting for processes to exit.
Jan 20 02:37:09.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@75-10.0.0.97:22-10.0.0.1:58356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:09.262483 systemd[1]: Started sshd@75-10.0.0.97:22-10.0.0.1:58356.service - OpenSSH per-connection server daemon (10.0.0.1:58356).
Jan 20 02:37:09.275973 systemd-logind[1619]: Removed session 76.
Jan 20 02:37:09.616124 kubelet[3053]: E0120 02:37:09.615074 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1"
Jan 20 02:37:10.047824 sshd[8393]: Accepted publickey for core from 10.0.0.1 port 58356 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:37:10.045000 audit[8393]: USER_ACCT pid=8393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:10.072828 sshd-session[8393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:37:10.064000 audit[8393]: CRED_ACQ pid=8393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:10.064000 audit[8393]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1ea79ed0 a2=3 a3=0 items=0 ppid=1 pid=8393 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=77 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:37:10.064000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:37:10.141387 systemd-logind[1619]: New session 77 of user core.
Jan 20 02:37:10.197032 systemd[1]: Started session-77.scope - Session 77 of User core.
Jan 20 02:37:10.271000 audit[8393]: USER_START pid=8393 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:10.290000 audit[8398]: CRED_ACQ pid=8398 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:11.210178 sshd[8398]: Connection closed by 10.0.0.1 port 58356
Jan 20 02:37:11.205564 sshd-session[8393]: pam_unix(sshd:session): session closed for user core
Jan 20 02:37:11.293289 kernel: kauditd_printk_skb: 24 callbacks suppressed
Jan 20 02:37:11.293454 kernel: audit: type=1106 audit(1768876631.215:1361): pid=8393 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:11.215000 audit[8393]: USER_END pid=8393 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:11.241000 audit[8393]: CRED_DISP pid=8393 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:11.335236 systemd[1]: sshd@75-10.0.0.97:22-10.0.0.1:58356.service: Deactivated successfully.
Jan 20 02:37:11.380686 kernel: audit: type=1104 audit(1768876631.241:1362): pid=8393 uid=0 auid=500 ses=77 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:11.380844 kernel: audit: type=1131 audit(1768876631.336:1363): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@75-10.0.0.97:22-10.0.0.1:58356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:11.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@75-10.0.0.97:22-10.0.0.1:58356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:11.416477 systemd[1]: session-77.scope: Deactivated successfully.
Jan 20 02:37:11.459695 systemd-logind[1619]: Session 77 logged out. Waiting for processes to exit.
Jan 20 02:37:11.462846 systemd-logind[1619]: Removed session 77.
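The userspace audit records and their kernel-printed copies (`kernel: audit: type=...`) are correlated by the `audit(epoch.millis:serial)` stamp rather than by the syslog timestamp. A small Python sketch for turning that stamp back into a UTC wall-clock time, assuming (as the surrounding syslog timestamps suggest) that the epoch seconds are UTC; the helper name is illustrative:

```python
import re
from datetime import datetime, timezone

def parse_audit_stamp(record: str):
    """Extract the audit(epoch.millis:serial) stamp from an audit record
    and return (UTC datetime, serial number), or None if no stamp is found."""
    m = re.search(r"audit\((\d+)\.(\d+):(\d+)\)", record)
    if m is None:
        return None
    when = datetime.fromtimestamp(int(m.group(1)), tz=timezone.utc)
    when = when.replace(microsecond=int(m.group(2)) * 1000)
    return when, int(m.group(3))

# The first record in this excerpt:
when, serial = parse_audit_stamp("audit(1768876625.869:1344): pid=1 uid=0 ...")
print(when.isoformat(), serial)
# -> 2026-01-20T02:37:05.869000+00:00 1344
```

Records sharing a serial (e.g. the type=1006/1300/1327 triple for one login) belong to a single audit event.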
Jan 20 02:37:15.521106 kubelet[3053]: E0120 02:37:15.516640 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:37:16.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@76-10.0.0.97:22-10.0.0.1:57024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:16.243193 systemd[1]: Started sshd@76-10.0.0.97:22-10.0.0.1:57024.service - OpenSSH per-connection server daemon (10.0.0.1:57024).
Jan 20 02:37:16.297885 kernel: audit: type=1130 audit(1768876636.241:1364): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@76-10.0.0.97:22-10.0.0.1:57024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:16.740000 audit[8412]: USER_ACCT pid=8412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:16.746810 sshd[8412]: Accepted publickey for core from 10.0.0.1 port 57024 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:37:16.787000 audit[8412]: CRED_ACQ pid=8412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:16.802885 sshd-session[8412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:37:16.868085 kernel: audit: type=1101 audit(1768876636.740:1365): pid=8412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:16.868243 kernel: audit: type=1103 audit(1768876636.787:1366): pid=8412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:16.868318 kernel: audit: type=1006 audit(1768876636.787:1367): pid=8412 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=78 res=1
Jan 20 02:37:16.949507 kernel: audit: type=1300 audit(1768876636.787:1367): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3dbbdd80 a2=3 a3=0 items=0 ppid=1 pid=8412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=78 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:37:16.787000 audit[8412]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3dbbdd80 a2=3 a3=0 items=0 ppid=1 pid=8412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=78 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:37:16.787000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:37:16.971581 kernel: audit: type=1327 audit(1768876636.787:1367): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:37:17.007345 systemd-logind[1619]: New session 78 of user core.
Jan 20 02:37:17.014508 systemd[1]: Started session-78.scope - Session 78 of User core.
Jan 20 02:37:17.112000 audit[8412]: USER_START pid=8412 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:17.191172 kernel: audit: type=1105 audit(1768876637.112:1368): pid=8412 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:17.191328 kernel: audit: type=1103 audit(1768876637.159:1369): pid=8418 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:17.159000 audit[8418]: CRED_ACQ pid=8418 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:17.534786 kubelet[3053]: E0120 02:37:17.532711 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509"
Jan 20 02:37:18.316347 sshd[8418]: Connection closed by 10.0.0.1 port 57024
Jan 20 02:37:18.310815 sshd-session[8412]: pam_unix(sshd:session): session closed for user core
Jan 20 02:37:18.311000 audit[8412]: USER_END pid=8412 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:18.373751 kernel: audit: type=1106 audit(1768876638.311:1370): pid=8412 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:18.340342 systemd-logind[1619]: Session 78 logged out. Waiting for processes to exit.
Jan 20 02:37:18.356220 systemd[1]: sshd@76-10.0.0.97:22-10.0.0.1:57024.service: Deactivated successfully.
Jan 20 02:37:18.311000 audit[8412]: CRED_DISP pid=8412 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:18.416104 kernel: audit: type=1104 audit(1768876638.311:1371): pid=8412 uid=0 auid=500 ses=78 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:18.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@76-10.0.0.97:22-10.0.0.1:57024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:18.379114 systemd[1]: session-78.scope: Deactivated successfully.
Jan 20 02:37:18.425745 systemd-logind[1619]: Removed session 78.
Jan 20 02:37:18.603691 containerd[1640]: time="2026-01-20T02:37:18.603496848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 20 02:37:18.840655 containerd[1640]: time="2026-01-20T02:37:18.840089648Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 02:37:18.865603 containerd[1640]: time="2026-01-20T02:37:18.865121097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 20 02:37:18.865603 containerd[1640]: time="2026-01-20T02:37:18.865274132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Jan 20 02:37:18.870379 kubelet[3053]: E0120 02:37:18.869597 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 20 02:37:18.870379 kubelet[3053]: E0120 02:37:18.869672 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 20 02:37:18.870379 kubelet[3053]: E0120 02:37:18.869905 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-749b857967-xt4pg_calico-system(75bc6f23-38ce-4e96-aaf1-83d653850866): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 20 02:37:18.893474 containerd[1640]: time="2026-01-20T02:37:18.893067881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 02:37:19.105751 containerd[1640]: time="2026-01-20T02:37:19.105684843Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 02:37:19.119861 containerd[1640]: time="2026-01-20T02:37:19.119660254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 02:37:19.119861 containerd[1640]: time="2026-01-20T02:37:19.119752315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 20 02:37:19.122706 kubelet[3053]: E0120 02:37:19.121311 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 02:37:19.122706 kubelet[3053]: E0120 02:37:19.121400 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 02:37:19.122706 kubelet[3053]: E0120 02:37:19.121645 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-764db5c9d9-r829f_calico-apiserver(ca9f2980-346b-4927-8985-9cb6081e02db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 02:37:19.122706 kubelet[3053]: E0120 02:37:19.121688 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db"
Jan 20 02:37:19.141266 containerd[1640]: time="2026-01-20T02:37:19.129153829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 20 02:37:19.319807 containerd[1640]: time="2026-01-20T02:37:19.319745406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 02:37:19.393671 containerd[1640]: time="2026-01-20T02:37:19.393466801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Jan 20 02:37:19.404034 containerd[1640]: time="2026-01-20T02:37:19.393894687Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 20 02:37:19.404655 kubelet[3053]: E0120 02:37:19.404602 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 20 02:37:19.406120 kubelet[3053]: E0120 02:37:19.405613 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 20 02:37:19.406120 kubelet[3053]: E0120 02:37:19.405724 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-749b857967-xt4pg_calico-system(75bc6f23-38ce-4e96-aaf1-83d653850866): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 20 02:37:19.406120 kubelet[3053]: E0120 02:37:19.405778 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866"
Jan 20 02:37:19.557152 kubelet[3053]: E0120 02:37:19.548227 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df"
Jan 20 02:37:19.563803 containerd[1640]: time="2026-01-20T02:37:19.563631286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 20 02:37:19.736345 containerd[1640]: time="2026-01-20T02:37:19.720921876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 02:37:19.757573 containerd[1640]: time="2026-01-20T02:37:19.753686218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 20 02:37:19.757573 containerd[1640]: time="2026-01-20T02:37:19.753838329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Jan 20 02:37:19.757831 kubelet[3053]: E0120 02:37:19.755674 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 20 02:37:19.757831 kubelet[3053]: E0120 02:37:19.755741 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 20 02:37:19.757831 kubelet[3053]: E0120 02:37:19.755840 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-746557d8fc-ztfh7_calico-system(e572f9c2-ce5a-4d3c-956a-a140a15040fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 20 02:37:19.757831 kubelet[3053]: E0120 02:37:19.755886 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb"
Jan 20 02:37:22.526459 containerd[1640]: time="2026-01-20T02:37:22.526407059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 20 02:37:22.658594 containerd[1640]: time="2026-01-20T02:37:22.658248399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 02:37:22.667346 containerd[1640]: time="2026-01-20T02:37:22.663752392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 20 02:37:22.667346 containerd[1640]: time="2026-01-20T02:37:22.663868127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 20 02:37:22.667798 kubelet[3053]: E0120 02:37:22.667752 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 20 02:37:22.673145 kubelet[3053]: E0120 02:37:22.672652 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 20 02:37:22.673145 kubelet[3053]: E0120 02:37:22.672771 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tfwc7_calico-system(4892884d-a213-4dd6-ab53-844c331ae6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 20 02:37:22.673145 kubelet[3053]: E0120 02:37:22.672820 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1"
Jan 20 02:37:23.387291 systemd[1]: Started sshd@77-10.0.0.97:22-10.0.0.1:57040.service - OpenSSH per-connection server daemon (10.0.0.1:57040).
Jan 20 02:37:23.400688 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:37:23.401123 kernel: audit: type=1130 audit(1768876643.388:1373): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@77-10.0.0.97:22-10.0.0.1:57040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:23.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@77-10.0.0.97:22-10.0.0.1:57040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:37:24.061000 audit[8446]: USER_ACCT pid=8446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:24.071590 sshd-session[8446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:37:24.082385 sshd[8446]: Accepted publickey for core from 10.0.0.1 port 57040 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:37:24.117596 kernel: audit: type=1101 audit(1768876644.061:1374): pid=8446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:24.061000 audit[8446]: CRED_ACQ pid=8446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:24.141304 systemd-logind[1619]: New session 79 of user core.
Jan 20 02:37:24.237472 kernel: audit: type=1103 audit(1768876644.061:1375): pid=8446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:37:24.237716 kernel: audit: type=1006 audit(1768876644.061:1376): pid=8446 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=79 res=1
Jan 20 02:37:24.237785 kernel: audit: type=1300 audit(1768876644.061:1376): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1c7855e0 a2=3 a3=0 items=0 ppid=1 pid=8446 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=79 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:37:24.061000 audit[8446]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1c7855e0 a2=3 a3=0 items=0 ppid=1 pid=8446 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=79 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:37:24.315298 kernel: audit: type=1327 audit(1768876644.061:1376): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:37:24.061000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:37:24.348697 systemd[1]: Started session-79.scope - Session 79 of User core.
Jan 20 02:37:24.423000 audit[8446]: USER_START pid=8446 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:24.542292 kernel: audit: type=1105 audit(1768876644.423:1377): pid=8446 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:24.542594 kernel: audit: type=1103 audit(1768876644.462:1378): pid=8450 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:24.462000 audit[8450]: CRED_ACQ pid=8450 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:25.509564 sshd[8450]: Connection closed by 10.0.0.1 port 57040 Jan 20 02:37:25.513042 sshd-session[8446]: pam_unix(sshd:session): session closed for user core Jan 20 02:37:25.523000 audit[8446]: USER_END pid=8446 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:25.535717 systemd-logind[1619]: Session 79 logged out. Waiting for processes to exit. 
Jan 20 02:37:25.540904 systemd[1]: sshd@77-10.0.0.97:22-10.0.0.1:57040.service: Deactivated successfully. Jan 20 02:37:25.550976 systemd[1]: session-79.scope: Deactivated successfully. Jan 20 02:37:25.568094 systemd-logind[1619]: Removed session 79. Jan 20 02:37:25.600627 kernel: audit: type=1106 audit(1768876645.523:1379): pid=8446 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:25.523000 audit[8446]: CRED_DISP pid=8446 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:25.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@77-10.0.0.97:22-10.0.0.1:57040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:37:25.651859 kernel: audit: type=1104 audit(1768876645.523:1380): pid=8446 uid=0 auid=500 ses=79 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:30.654810 kubelet[3053]: E0120 02:37:30.636382 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:37:30.654810 kubelet[3053]: E0120 02:37:30.645676 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:37:30.783168 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:37:30.783345 kernel: audit: type=1130 audit(1768876650.706:1382): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@78-10.0.0.97:22-10.0.0.1:50500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:37:30.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@78-10.0.0.97:22-10.0.0.1:50500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:37:30.708585 systemd[1]: Started sshd@78-10.0.0.97:22-10.0.0.1:50500.service - OpenSSH per-connection server daemon (10.0.0.1:50500). 
Jan 20 02:37:31.253000 audit[8470]: USER_ACCT pid=8470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:31.281407 sshd-session[8470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:37:31.305628 sshd[8470]: Accepted publickey for core from 10.0.0.1 port 50500 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:37:31.324173 kernel: audit: type=1101 audit(1768876651.253:1383): pid=8470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:31.270000 audit[8470]: CRED_ACQ pid=8470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:31.434216 kernel: audit: type=1103 audit(1768876651.270:1384): pid=8470 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:31.434395 kernel: audit: type=1006 audit(1768876651.272:1385): pid=8470 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=80 res=1 Jan 20 02:37:31.434436 kernel: audit: type=1300 audit(1768876651.272:1385): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff1684fc0 a2=3 a3=0 items=0 ppid=1 pid=8470 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=80 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:31.272000 audit[8470]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff1684fc0 a2=3 a3=0 items=0 ppid=1 pid=8470 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=80 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:31.272000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:37:31.474310 systemd-logind[1619]: New session 80 of user core. Jan 20 02:37:31.498430 kernel: audit: type=1327 audit(1768876651.272:1385): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:37:31.518455 systemd[1]: Started session-80.scope - Session 80 of User core. Jan 20 02:37:31.562000 audit[8470]: USER_START pid=8470 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:31.631743 kernel: audit: type=1105 audit(1768876651.562:1386): pid=8470 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:31.577000 audit[8483]: CRED_ACQ pid=8483 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:31.706171 kernel: audit: type=1103 audit(1768876651.577:1387): pid=8483 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:32.565640 containerd[1640]: time="2026-01-20T02:37:32.564603569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:37:32.576199 kubelet[3053]: E0120 02:37:32.572432 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:37:32.738080 containerd[1640]: time="2026-01-20T02:37:32.735709029Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:37:32.755639 containerd[1640]: time="2026-01-20T02:37:32.755427126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:37:32.756028 containerd[1640]: time="2026-01-20T02:37:32.755895316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:37:32.756651 kubelet[3053]: E0120 02:37:32.756595 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:37:32.756867 kubelet[3053]: E0120 
02:37:32.756804 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:37:32.766840 kubelet[3053]: E0120 02:37:32.766791 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-764db5c9d9-v64bv_calico-apiserver(4d193768-31ad-4962-ae34-80e85c7499df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:37:32.767333 kubelet[3053]: E0120 02:37:32.767297 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:37:32.909721 sshd[8483]: Connection closed by 10.0.0.1 port 50500 Jan 20 02:37:32.912148 sshd-session[8470]: pam_unix(sshd:session): session closed for user core Jan 20 02:37:32.939000 audit[8470]: USER_END pid=8470 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:32.983490 systemd[1]: sshd@78-10.0.0.97:22-10.0.0.1:50500.service: Deactivated successfully. 
Jan 20 02:37:33.076136 kernel: audit: type=1106 audit(1768876652.939:1388): pid=8470 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:33.076291 kernel: audit: type=1104 audit(1768876652.943:1389): pid=8470 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:32.943000 audit[8470]: CRED_DISP pid=8470 uid=0 auid=500 ses=80 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:33.050354 systemd[1]: session-80.scope: Deactivated successfully. Jan 20 02:37:32.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@78-10.0.0.97:22-10.0.0.1:50500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:37:33.081729 systemd-logind[1619]: Session 80 logged out. Waiting for processes to exit. Jan 20 02:37:33.154232 systemd-logind[1619]: Removed session 80. 
Jan 20 02:37:33.547759 kubelet[3053]: E0120 02:37:33.533685 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:37:34.542488 kubelet[3053]: E0120 02:37:34.528391 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:37:38.026860 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:37:38.027056 kernel: audit: type=1130 audit(1768876657.980:1391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@79-10.0.0.97:22-10.0.0.1:33946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:37:37.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@79-10.0.0.97:22-10.0.0.1:33946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:37:37.981697 systemd[1]: Started sshd@79-10.0.0.97:22-10.0.0.1:33946.service - OpenSSH per-connection server daemon (10.0.0.1:33946). 
Jan 20 02:37:38.741187 sshd[8519]: Accepted publickey for core from 10.0.0.1 port 33946 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:37:38.732000 audit[8519]: USER_ACCT pid=8519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:38.757800 sshd-session[8519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:37:38.748000 audit[8519]: CRED_ACQ pid=8519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:38.819503 systemd-logind[1619]: New session 81 of user core. Jan 20 02:37:38.850684 kernel: audit: type=1101 audit(1768876658.732:1392): pid=8519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:38.850858 kernel: audit: type=1103 audit(1768876658.748:1393): pid=8519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:38.850915 kernel: audit: type=1006 audit(1768876658.748:1394): pid=8519 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=81 res=1 Jan 20 02:37:38.748000 audit[8519]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb00c3f50 a2=3 a3=0 items=0 ppid=1 pid=8519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=81 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:38.941928 kernel: audit: type=1300 audit(1768876658.748:1394): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb00c3f50 a2=3 a3=0 items=0 ppid=1 pid=8519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=81 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:38.946987 kernel: audit: type=1327 audit(1768876658.748:1394): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:37:38.748000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:37:38.964414 systemd[1]: Started session-81.scope - Session 81 of User core. Jan 20 02:37:39.019000 audit[8519]: USER_START pid=8519 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:39.103234 kernel: audit: type=1105 audit(1768876659.019:1395): pid=8519 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:39.103376 kernel: audit: type=1103 audit(1768876659.069:1396): pid=8525 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:39.069000 audit[8525]: CRED_ACQ pid=8525 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:39.530075 kubelet[3053]: E0120 02:37:39.518494 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:39.946501 sshd[8525]: Connection closed by 10.0.0.1 port 33946 Jan 20 02:37:39.961650 sshd-session[8519]: pam_unix(sshd:session): session closed for user core Jan 20 02:37:39.962000 audit[8519]: USER_END pid=8519 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:40.001780 systemd[1]: sshd@79-10.0.0.97:22-10.0.0.1:33946.service: Deactivated successfully. Jan 20 02:37:40.019994 systemd[1]: session-81.scope: Deactivated successfully. 
Jan 20 02:37:40.070124 kernel: audit: type=1106 audit(1768876659.962:1397): pid=8519 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:40.070303 kernel: audit: type=1104 audit(1768876659.962:1398): pid=8519 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:39.962000 audit[8519]: CRED_DISP pid=8519 uid=0 auid=500 ses=81 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:40.072512 systemd-logind[1619]: Session 81 logged out. Waiting for processes to exit. Jan 20 02:37:40.091415 systemd-logind[1619]: Removed session 81. Jan 20 02:37:40.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@79-10.0.0.97:22-10.0.0.1:33946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:37:41.539324 kubelet[3053]: E0120 02:37:41.535176 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:37:42.640668 containerd[1640]: time="2026-01-20T02:37:42.640620478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:37:42.818168 containerd[1640]: time="2026-01-20T02:37:42.802486389Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:37:42.831211 containerd[1640]: time="2026-01-20T02:37:42.830693065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:37:42.831211 containerd[1640]: time="2026-01-20T02:37:42.830912232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:37:42.837658 kubelet[3053]: E0120 02:37:42.837153 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:37:42.837658 kubelet[3053]: E0120 02:37:42.837226 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:37:42.841306 kubelet[3053]: E0120 02:37:42.839641 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:37:42.852009 containerd[1640]: time="2026-01-20T02:37:42.851631549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:37:43.007366 containerd[1640]: time="2026-01-20T02:37:42.999127620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:37:43.007366 containerd[1640]: time="2026-01-20T02:37:43.006623301Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:37:43.007366 containerd[1640]: time="2026-01-20T02:37:43.006926764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:37:43.009778 kubelet[3053]: E0120 02:37:43.007413 3053 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:37:43.009778 kubelet[3053]: E0120 02:37:43.007474 3053 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:37:43.009778 kubelet[3053]: E0120 02:37:43.007612 3053 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9lglv_calico-system(797382c1-6a9f-48bd-be88-5e85feeef509): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:37:43.009778 kubelet[3053]: E0120 02:37:43.007668 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:37:44.526201 kubelet[3053]: E0120 02:37:44.525059 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:37:45.000904 systemd[1]: Started sshd@80-10.0.0.97:22-10.0.0.1:37060.service - OpenSSH per-connection server daemon (10.0.0.1:37060). Jan 20 02:37:45.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@80-10.0.0.97:22-10.0.0.1:37060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:37:45.015696 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:37:45.015806 kernel: audit: type=1130 audit(1768876665.002:1400): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@80-10.0.0.97:22-10.0.0.1:37060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:37:45.347000 audit[8538]: USER_ACCT pid=8538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:45.366201 sshd-session[8538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:37:45.368756 kernel: audit: type=1101 audit(1768876665.347:1401): pid=8538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:45.369002 sshd[8538]: Accepted publickey for core from 10.0.0.1 port 37060 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:37:45.362000 audit[8538]: CRED_ACQ pid=8538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:45.410613 systemd-logind[1619]: New session 82 of user core. 
Jan 20 02:37:45.435161 kernel: audit: type=1103 audit(1768876665.362:1402): pid=8538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:45.435294 kernel: audit: type=1006 audit(1768876665.362:1403): pid=8538 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=82 res=1 Jan 20 02:37:45.435339 kernel: audit: type=1300 audit(1768876665.362:1403): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4c899770 a2=3 a3=0 items=0 ppid=1 pid=8538 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=82 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:45.362000 audit[8538]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4c899770 a2=3 a3=0 items=0 ppid=1 pid=8538 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=82 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:45.362000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:37:45.521099 kernel: audit: type=1327 audit(1768876665.362:1403): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:37:45.525274 systemd[1]: Started session-82.scope - Session 82 of User core. 
Jan 20 02:37:45.649509 kubelet[3053]: E0120 02:37:45.644168 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:37:45.665000 audit[8538]: USER_START pid=8538 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:45.790703 kernel: audit: type=1105 audit(1768876665.665:1404): pid=8538 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:45.800391 kernel: audit: type=1103 audit(1768876665.683:1405): pid=8542 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:45.683000 audit[8542]: CRED_ACQ pid=8542 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:46.337318 sshd[8542]: Connection closed by 10.0.0.1 port 37060 Jan 
20 02:37:46.343751 sshd-session[8538]: pam_unix(sshd:session): session closed for user core Jan 20 02:37:46.430664 kernel: audit: type=1106 audit(1768876666.355:1406): pid=8538 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:46.355000 audit[8538]: USER_END pid=8538 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:46.392427 systemd[1]: sshd@80-10.0.0.97:22-10.0.0.1:37060.service: Deactivated successfully. Jan 20 02:37:46.427797 systemd[1]: session-82.scope: Deactivated successfully. Jan 20 02:37:46.485391 kernel: audit: type=1104 audit(1768876666.355:1407): pid=8538 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:46.355000 audit[8538]: CRED_DISP pid=8538 uid=0 auid=500 ses=82 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:46.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@80-10.0.0.97:22-10.0.0.1:37060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:37:46.466746 systemd-logind[1619]: Session 82 logged out. Waiting for processes to exit. 
Jan 20 02:37:46.474129 systemd-logind[1619]: Removed session 82. Jan 20 02:37:46.581147 kubelet[3053]: E0120 02:37:46.581085 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:37:47.516594 kubelet[3053]: E0120 02:37:47.513808 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:37:47.570203 kubelet[3053]: E0120 02:37:47.558905 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:37:51.412912 systemd[1]: Started sshd@81-10.0.0.97:22-10.0.0.1:37068.service - OpenSSH per-connection server daemon (10.0.0.1:37068). Jan 20 02:37:51.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@81-10.0.0.97:22-10.0.0.1:37068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:37:51.513170 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:37:51.513346 kernel: audit: type=1130 audit(1768876671.413:1409): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@81-10.0.0.97:22-10.0.0.1:37068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:37:52.113000 audit[8557]: USER_ACCT pid=8557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:52.144833 sshd[8557]: Accepted publickey for core from 10.0.0.1 port 37068 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:37:52.185397 kernel: audit: type=1101 audit(1768876672.113:1410): pid=8557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:52.184000 audit[8557]: CRED_ACQ pid=8557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:52.201808 sshd-session[8557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:37:52.314424 kernel: audit: type=1103 audit(1768876672.184:1411): pid=8557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:52.314670 kernel: audit: type=1006 audit(1768876672.184:1412): pid=8557 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=83 res=1 Jan 20 02:37:52.184000 audit[8557]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd39ca560 a2=3 a3=0 items=0 ppid=1 pid=8557 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=83 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:52.341437 systemd-logind[1619]: New session 83 of user core. Jan 20 02:37:52.184000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:37:52.401199 kernel: audit: type=1300 audit(1768876672.184:1412): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd39ca560 a2=3 a3=0 items=0 ppid=1 pid=8557 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=83 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:37:52.401360 kernel: audit: type=1327 audit(1768876672.184:1412): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:37:52.420165 systemd[1]: Started session-83.scope - Session 83 of User core. 
Jan 20 02:37:52.543000 audit[8557]: USER_START pid=8557 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:52.660620 kernel: audit: type=1105 audit(1768876672.543:1413): pid=8557 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:52.700000 audit[8561]: CRED_ACQ pid=8561 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:52.818084 kernel: audit: type=1103 audit(1768876672.700:1414): pid=8561 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:54.026614 sshd[8561]: Connection closed by 10.0.0.1 port 37068 Jan 20 02:37:54.028121 sshd-session[8557]: pam_unix(sshd:session): session closed for user core Jan 20 02:37:54.042000 audit[8557]: USER_END pid=8557 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:54.136378 systemd-logind[1619]: Session 83 logged out. Waiting for processes to exit. 
Jan 20 02:37:54.050000 audit[8557]: CRED_DISP pid=8557 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:54.162680 systemd[1]: sshd@81-10.0.0.97:22-10.0.0.1:37068.service: Deactivated successfully. Jan 20 02:37:54.187440 systemd[1]: session-83.scope: Deactivated successfully. Jan 20 02:37:54.212328 systemd-logind[1619]: Removed session 83. Jan 20 02:37:54.235853 kernel: audit: type=1106 audit(1768876674.042:1415): pid=8557 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:54.236047 kernel: audit: type=1104 audit(1768876674.050:1416): pid=8557 uid=0 auid=500 ses=83 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:37:54.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@81-10.0.0.97:22-10.0.0.1:37068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:37:55.532071 kubelet[3053]: E0120 02:37:55.530903 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:37:56.610622 kubelet[3053]: E0120 02:37:56.607166 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:37:56.623095 kubelet[3053]: E0120 02:37:56.621315 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:37:57.645268 kubelet[3053]: E0120 02:37:57.643249 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:37:58.623018 kubelet[3053]: E0120 02:37:58.622695 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:37:59.095734 systemd[1]: Started sshd@82-10.0.0.97:22-10.0.0.1:39866.service - OpenSSH per-connection server daemon (10.0.0.1:39866). 
Jan 20 02:37:59.142285 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:37:59.142407 kernel: audit: type=1130 audit(1768876679.095:1418): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@82-10.0.0.97:22-10.0.0.1:39866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:37:59.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@82-10.0.0.97:22-10.0.0.1:39866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:38:18.472831 kernel: sched: DL replenish lagged too much Jan 20 02:38:18.493127 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 5759195439 wd_nsec: 5759192857 Jan 20 02:38:19.744736 systemd[1]: cri-containerd-abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7.scope: Deactivated successfully. Jan 20 02:38:19.868000 audit: BPF prog-id=96 op=UNLOAD Jan 20 02:38:19.852007 systemd[1]: cri-containerd-abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7.scope: Consumed 34.271s CPU time, 81.2M memory peak, 19M read from disk. 
Jan 20 02:38:19.963583 kernel: audit: type=1334 audit(1768876699.868:1419): prog-id=96 op=UNLOAD Jan 20 02:38:19.963771 kernel: audit: type=1334 audit(1768876699.868:1420): prog-id=100 op=UNLOAD Jan 20 02:38:19.963837 kernel: audit: type=1334 audit(1768876699.880:1421): prog-id=259 op=LOAD Jan 20 02:38:19.989775 kernel: audit: type=1334 audit(1768876699.880:1422): prog-id=81 op=UNLOAD Jan 20 02:38:19.868000 audit: BPF prog-id=100 op=UNLOAD Jan 20 02:38:19.880000 audit: BPF prog-id=259 op=LOAD Jan 20 02:38:19.880000 audit: BPF prog-id=81 op=UNLOAD Jan 20 02:38:20.185000 audit[8575]: USER_ACCT pid=8575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:20.283937 kernel: audit: type=1101 audit(1768876700.185:1423): pid=8575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:20.339508 sshd[8575]: Accepted publickey for core from 10.0.0.1 port 39866 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:38:20.446000 audit[8575]: CRED_ACQ pid=8575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:20.594811 kernel: audit: type=1103 audit(1768876700.446:1424): pid=8575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:20.618816 kernel: audit: type=1006 
audit(1768876700.446:1425): pid=8575 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=84 res=1 Jan 20 02:38:20.614774 sshd-session[8575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:38:20.446000 audit[8575]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9dc2aff0 a2=3 a3=0 items=0 ppid=1 pid=8575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=84 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:20.693981 kernel: audit: type=1300 audit(1768876700.446:1425): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9dc2aff0 a2=3 a3=0 items=0 ppid=1 pid=8575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=84 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:20.694180 kernel: audit: type=1327 audit(1768876700.446:1425): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:38:20.446000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:38:20.695679 containerd[1640]: time="2026-01-20T02:38:20.695099148Z" level=info msg="received container exit event container_id:\"abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7\" id:\"abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7\" pid:2871 exit_status:1 exited_at:{seconds:1768876700 nanos:414485847}" Jan 20 02:38:21.091636 systemd-logind[1619]: New session 84 of user core. Jan 20 02:38:21.164041 systemd[1]: Started session-84.scope - Session 84 of User core. 
Jan 20 02:38:21.212000 audit[8575]: USER_START pid=8575 uid=0 auid=500 ses=84 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:21.297139 kernel: audit: type=1105 audit(1768876701.212:1426): pid=8575 uid=0 auid=500 ses=84 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:21.243000 audit[8582]: CRED_ACQ pid=8582 uid=0 auid=500 ses=84 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:21.804584 kubelet[3053]: E0120 02:38:21.792296 3053 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="20.614s" Jan 20 02:38:21.887355 kubelet[3053]: E0120 02:38:21.887314 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:21.902652 kubelet[3053]: E0120 02:38:21.902608 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:21.920175 kubelet[3053]: E0120 02:38:21.920135 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:21.948472 kubelet[3053]: E0120 02:38:21.948362 3053 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:21.991138 kubelet[3053]: E0120 02:38:21.991033 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:38:21.991640 kubelet[3053]: E0120 02:38:21.991217 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:38:21.991640 kubelet[3053]: E0120 02:38:21.991319 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:38:21.991640 kubelet[3053]: E0120 02:38:21.991432 
3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:38:22.081000 audit: BPF prog-id=260 op=LOAD Jan 20 02:38:22.092000 audit: BPF prog-id=91 op=UNLOAD Jan 20 02:38:22.079982 systemd[1]: cri-containerd-bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9.scope: Deactivated successfully. Jan 20 02:38:22.112000 audit: BPF prog-id=101 op=UNLOAD Jan 20 02:38:22.112000 audit: BPF prog-id=105 op=UNLOAD Jan 20 02:38:22.080596 systemd[1]: cri-containerd-bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9.scope: Consumed 25.750s CPU time, 37.3M memory peak, 14M read from disk. 
Jan 20 02:38:22.174838 containerd[1640]: time="2026-01-20T02:38:22.139398564Z" level=info msg="received container exit event container_id:\"bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9\" id:\"bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9\" pid:2882 exit_status:1 exited_at:{seconds:1768876702 nanos:124052163}" Jan 20 02:38:22.434598 kubelet[3053]: E0120 02:38:22.424933 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:38:22.542509 kubelet[3053]: E0120 02:38:22.542443 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:38:23.050073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7-rootfs.mount: Deactivated successfully. Jan 20 02:38:23.373000 audit[8575]: USER_END pid=8575 uid=0 auid=500 ses=84 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:23.378000 audit[8575]: CRED_DISP pid=8575 uid=0 auid=500 ses=84 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:23.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@82-10.0.0.97:22-10.0.0.1:39866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:38:23.366191 sshd-session[8575]: pam_unix(sshd:session): session closed for user core Jan 20 02:38:23.462943 sshd[8582]: Connection closed by 10.0.0.1 port 39866 Jan 20 02:38:23.453656 systemd[1]: sshd@82-10.0.0.97:22-10.0.0.1:39866.service: Deactivated successfully. Jan 20 02:38:23.491083 systemd-logind[1619]: Session 84 logged out. Waiting for processes to exit. Jan 20 02:38:23.508366 systemd[1]: session-84.scope: Deactivated successfully. 
Jan 20 02:38:23.561994 kubelet[3053]: E0120 02:38:23.557401 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:23.610713 systemd-logind[1619]: Removed session 84. Jan 20 02:38:23.836597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9-rootfs.mount: Deactivated successfully. Jan 20 02:38:24.267455 kubelet[3053]: I0120 02:38:24.256492 3053 scope.go:117] "RemoveContainer" containerID="bcc0085ed75a0f61a6e74a1c8c8b3ac353adbb27f50992517285dc114faf5de9" Jan 20 02:38:24.267455 kubelet[3053]: E0120 02:38:24.256683 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:24.336078 kubelet[3053]: I0120 02:38:24.320291 3053 scope.go:117] "RemoveContainer" containerID="abfeb7e53b538232cdaec1ef2b946fa9fdf53e21d361312cdc9eaece6c5496c7" Jan 20 02:38:24.336078 kubelet[3053]: E0120 02:38:24.320421 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:24.385576 containerd[1640]: time="2026-01-20T02:38:24.385103810Z" level=info msg="CreateContainer within sandbox \"35266ffcbfefb91d99c73abb389f64dc351f36f770e4f3376793cfb932defa8a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 20 02:38:24.412783 containerd[1640]: time="2026-01-20T02:38:24.411358097Z" level=info msg="CreateContainer within sandbox \"39384d34a04d23681e40005f6d340275dfdc1cb7b4da6d980d4d8394f6e7eeac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 20 02:38:24.979481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3555948263.mount: Deactivated successfully. 
Jan 20 02:38:25.006769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2227525851.mount: Deactivated successfully. Jan 20 02:38:25.059955 containerd[1640]: time="2026-01-20T02:38:25.057847332Z" level=info msg="Container e71ff11433643035a033e507572408f3e7e149ad47e50add0c8e33a4ea78d176: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:38:25.100324 containerd[1640]: time="2026-01-20T02:38:25.090708782Z" level=info msg="Container b892501541d0683991edc967f6291c8a46628bb69a3dd84a0a1e597adcdd8ebb: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:38:25.275686 containerd[1640]: time="2026-01-20T02:38:25.271021217Z" level=info msg="CreateContainer within sandbox \"39384d34a04d23681e40005f6d340275dfdc1cb7b4da6d980d4d8394f6e7eeac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e71ff11433643035a033e507572408f3e7e149ad47e50add0c8e33a4ea78d176\"" Jan 20 02:38:25.288023 containerd[1640]: time="2026-01-20T02:38:25.281591702Z" level=info msg="StartContainer for \"e71ff11433643035a033e507572408f3e7e149ad47e50add0c8e33a4ea78d176\"" Jan 20 02:38:25.288023 containerd[1640]: time="2026-01-20T02:38:25.282758596Z" level=info msg="CreateContainer within sandbox \"35266ffcbfefb91d99c73abb389f64dc351f36f770e4f3376793cfb932defa8a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b892501541d0683991edc967f6291c8a46628bb69a3dd84a0a1e597adcdd8ebb\"" Jan 20 02:38:25.296169 containerd[1640]: time="2026-01-20T02:38:25.296122680Z" level=info msg="StartContainer for \"b892501541d0683991edc967f6291c8a46628bb69a3dd84a0a1e597adcdd8ebb\"" Jan 20 02:38:25.296838 containerd[1640]: time="2026-01-20T02:38:25.296800141Z" level=info msg="connecting to shim e71ff11433643035a033e507572408f3e7e149ad47e50add0c8e33a4ea78d176" address="unix:///run/containerd/s/32b8188b67ea639612ce43054422b7793c18eaa1371062e87392b043712ef6b9" protocol=ttrpc version=3 Jan 20 02:38:25.376759 containerd[1640]: time="2026-01-20T02:38:25.376705973Z" 
level=info msg="connecting to shim b892501541d0683991edc967f6291c8a46628bb69a3dd84a0a1e597adcdd8ebb" address="unix:///run/containerd/s/d3c2171262e3e4df2eb76152e3733ae9c22476785b1c4bc19387a26f34ca0c68" protocol=ttrpc version=3 Jan 20 02:38:25.688398 systemd[1]: Started cri-containerd-e71ff11433643035a033e507572408f3e7e149ad47e50add0c8e33a4ea78d176.scope - libcontainer container e71ff11433643035a033e507572408f3e7e149ad47e50add0c8e33a4ea78d176. Jan 20 02:38:26.036786 systemd[1]: Started cri-containerd-b892501541d0683991edc967f6291c8a46628bb69a3dd84a0a1e597adcdd8ebb.scope - libcontainer container b892501541d0683991edc967f6291c8a46628bb69a3dd84a0a1e597adcdd8ebb. Jan 20 02:38:26.402114 kernel: kauditd_printk_skb: 8 callbacks suppressed Jan 20 02:38:26.402326 kernel: audit: type=1334 audit(1768876706.396:1435): prog-id=261 op=LOAD Jan 20 02:38:26.396000 audit: BPF prog-id=261 op=LOAD Jan 20 02:38:26.430972 kernel: audit: type=1334 audit(1768876706.414:1436): prog-id=262 op=LOAD Jan 20 02:38:26.414000 audit: BPF prog-id=262 op=LOAD Jan 20 02:38:26.414000 audit[8641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000194238 a2=98 a3=0 items=0 ppid=2738 pid=8641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.525000 kernel: audit: type=1300 audit(1768876706.414:1436): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000194238 a2=98 a3=0 items=0 ppid=2738 pid=8641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.525158 kernel: audit: type=1327 audit(1768876706.414:1436): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238393235303135343164303638333939316564633936376636323931 Jan 20 02:38:26.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238393235303135343164303638333939316564633936376636323931 Jan 20 02:38:26.525306 kubelet[3053]: E0120 02:38:26.520107 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:26.606606 kernel: audit: type=1334 audit(1768876706.414:1437): prog-id=262 op=UNLOAD Jan 20 02:38:26.414000 audit: BPF prog-id=262 op=UNLOAD Jan 20 02:38:26.414000 audit[8641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2738 pid=8641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.676191 kernel: audit: type=1300 audit(1768876706.414:1437): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2738 pid=8641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.676413 kernel: audit: type=1327 audit(1768876706.414:1437): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238393235303135343164303638333939316564633936376636323931 Jan 20 02:38:26.414000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238393235303135343164303638333939316564633936376636323931 Jan 20 02:38:26.740440 kernel: audit: type=1334 audit(1768876706.414:1438): prog-id=263 op=LOAD Jan 20 02:38:26.414000 audit: BPF prog-id=263 op=LOAD Jan 20 02:38:26.414000 audit[8641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000194488 a2=98 a3=0 items=0 ppid=2738 pid=8641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.765011 kernel: audit: type=1300 audit(1768876706.414:1438): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000194488 a2=98 a3=0 items=0 ppid=2738 pid=8641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.765165 kernel: audit: type=1327 audit(1768876706.414:1438): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238393235303135343164303638333939316564633936376636323931 Jan 20 02:38:26.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238393235303135343164303638333939316564633936376636323931 Jan 20 02:38:26.414000 audit: BPF prog-id=264 op=LOAD Jan 20 02:38:26.414000 audit[8641]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000194218 a2=98 a3=0 items=0 ppid=2738 pid=8641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238393235303135343164303638333939316564633936376636323931 Jan 20 02:38:26.415000 audit: BPF prog-id=264 op=UNLOAD Jan 20 02:38:26.415000 audit[8641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2738 pid=8641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238393235303135343164303638333939316564633936376636323931 Jan 20 02:38:26.415000 audit: BPF prog-id=263 op=UNLOAD Jan 20 02:38:26.415000 audit[8641]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2738 pid=8641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238393235303135343164303638333939316564633936376636323931 Jan 20 02:38:26.422000 audit: BPF prog-id=265 op=LOAD Jan 20 02:38:26.415000 audit: BPF prog-id=266 op=LOAD Jan 20 02:38:26.415000 audit[8641]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c0001946e8 a2=98 a3=0 items=0 ppid=2738 pid=8641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238393235303135343164303638333939316564633936376636323931 Jan 20 02:38:26.441000 audit: BPF prog-id=267 op=LOAD Jan 20 02:38:26.441000 audit[8640]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2719 pid=8640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537316666313134333336343330333561303333653530373537323430 Jan 20 02:38:26.441000 audit: BPF prog-id=267 op=UNLOAD Jan 20 02:38:26.441000 audit[8640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2719 pid=8640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537316666313134333336343330333561303333653530373537323430 Jan 20 02:38:26.442000 audit: BPF prog-id=268 op=LOAD Jan 20 02:38:26.442000 audit[8640]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2719 pid=8640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537316666313134333336343330333561303333653530373537323430 Jan 20 02:38:26.442000 audit: BPF prog-id=269 op=LOAD Jan 20 02:38:26.442000 audit[8640]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2719 pid=8640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537316666313134333336343330333561303333653530373537323430 Jan 20 02:38:26.442000 audit: BPF prog-id=269 op=UNLOAD Jan 20 02:38:26.442000 audit[8640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2719 pid=8640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537316666313134333336343330333561303333653530373537323430 Jan 20 02:38:26.442000 audit: BPF prog-id=268 op=UNLOAD 
Jan 20 02:38:26.442000 audit[8640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2719 pid=8640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537316666313134333336343330333561303333653530373537323430 Jan 20 02:38:26.480000 audit: BPF prog-id=270 op=LOAD Jan 20 02:38:26.480000 audit[8640]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2719 pid=8640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:26.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537316666313134333336343330333561303333653530373537323430 Jan 20 02:38:27.047926 containerd[1640]: time="2026-01-20T02:38:27.046984144Z" level=info msg="StartContainer for \"e71ff11433643035a033e507572408f3e7e149ad47e50add0c8e33a4ea78d176\" returns successfully" Jan 20 02:38:27.071339 containerd[1640]: time="2026-01-20T02:38:27.069776170Z" level=info msg="StartContainer for \"b892501541d0683991edc967f6291c8a46628bb69a3dd84a0a1e597adcdd8ebb\" returns successfully" Jan 20 02:38:27.803890 kubelet[3053]: E0120 02:38:27.802177 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:27.882270 kubelet[3053]: E0120 
02:38:27.882059 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:28.440360 systemd[1]: Started sshd@83-10.0.0.97:22-10.0.0.1:41822.service - OpenSSH per-connection server daemon (10.0.0.1:41822). Jan 20 02:38:28.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@83-10.0.0.97:22-10.0.0.1:41822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:38:28.540434 kubelet[3053]: E0120 02:38:28.521379 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:28.889929 kubelet[3053]: E0120 02:38:28.888224 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:28.929226 kubelet[3053]: E0120 02:38:28.921461 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:29.010000 audit[8702]: USER_ACCT pid=8702 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:29.020376 sshd[8702]: Accepted publickey for core from 10.0.0.1 port 41822 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:38:29.028000 audit[8702]: CRED_ACQ pid=8702 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:29.028000 audit[8702]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff347d4180 a2=3 a3=0 items=0 ppid=1 pid=8702 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=85 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:29.028000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:38:29.038124 sshd-session[8702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:38:29.073319 systemd-logind[1619]: New session 85 of user core. Jan 20 02:38:29.103353 systemd[1]: Started session-85.scope - Session 85 of User core. Jan 20 02:38:29.141000 audit[8702]: USER_START pid=8702 uid=0 auid=500 ses=85 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:29.170000 audit[8706]: CRED_ACQ pid=8706 uid=0 auid=500 ses=85 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:29.462033 sshd[8706]: Connection closed by 10.0.0.1 port 41822 Jan 20 02:38:29.469321 sshd-session[8702]: pam_unix(sshd:session): session closed for user core Jan 20 02:38:29.504000 audit[8702]: USER_END pid=8702 uid=0 auid=500 ses=85 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:29.504000 audit[8702]: CRED_DISP pid=8702 
uid=0 auid=500 ses=85 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:29.527745 systemd[1]: sshd@83-10.0.0.97:22-10.0.0.1:41822.service: Deactivated successfully. Jan 20 02:38:29.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@83-10.0.0.97:22-10.0.0.1:41822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:38:29.535261 systemd[1]: session-85.scope: Deactivated successfully. Jan 20 02:38:29.597096 systemd-logind[1619]: Session 85 logged out. Waiting for processes to exit. Jan 20 02:38:29.626101 systemd-logind[1619]: Removed session 85. Jan 20 02:38:29.937747 kubelet[3053]: E0120 02:38:29.920480 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:38:33.544263 kubelet[3053]: E0120 02:38:33.537809 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 
20 02:38:34.527056 kubelet[3053]: E0120 02:38:34.525386 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:38:34.543072 kubelet[3053]: E0120 02:38:34.539769 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:38:34.616761 kernel: kauditd_printk_skb: 45 callbacks suppressed Jan 20 02:38:34.617296 kernel: audit: type=1130 audit(1768876714.577:1460): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@84-10.0.0.97:22-10.0.0.1:49234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:38:34.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@84-10.0.0.97:22-10.0.0.1:49234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:38:34.625715 kubelet[3053]: E0120 02:38:34.594441 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:38:34.578336 systemd[1]: Started sshd@84-10.0.0.97:22-10.0.0.1:49234.service - OpenSSH per-connection server daemon (10.0.0.1:49234). 
Jan 20 02:38:35.187815 kernel: audit: type=1101 audit(1768876715.122:1461): pid=8722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:35.122000 audit[8722]: USER_ACCT pid=8722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:35.197309 sshd[8722]: Accepted publickey for core from 10.0.0.1 port 49234 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:38:35.220501 sshd-session[8722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:38:35.218000 audit[8722]: CRED_ACQ pid=8722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:35.266196 kernel: audit: type=1103 audit(1768876715.218:1462): pid=8722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:35.278704 systemd-logind[1619]: New session 86 of user core. 
Jan 20 02:38:35.302098 kernel: audit: type=1006 audit(1768876715.218:1463): pid=8722 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=86 res=1 Jan 20 02:38:35.218000 audit[8722]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd03bc13d0 a2=3 a3=0 items=0 ppid=1 pid=8722 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=86 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:35.310215 systemd[1]: Started session-86.scope - Session 86 of User core. Jan 20 02:38:35.380578 kernel: audit: type=1300 audit(1768876715.218:1463): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd03bc13d0 a2=3 a3=0 items=0 ppid=1 pid=8722 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=86 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:38:35.380725 kernel: audit: type=1327 audit(1768876715.218:1463): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:38:35.218000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:38:35.388000 audit[8722]: USER_START pid=8722 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:35.460710 kernel: audit: type=1105 audit(1768876715.388:1464): pid=8722 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:35.416000 
audit[8726]: CRED_ACQ pid=8726 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:35.518021 kernel: audit: type=1103 audit(1768876715.416:1465): pid=8726 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:35.531042 kubelet[3053]: E0120 02:38:35.530284 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:38:35.588727 kubelet[3053]: E0120 02:38:35.581230 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:38:36.071331 kubelet[3053]: E0120 02:38:36.071160 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 
02:38:36.752253 sshd[8726]: Connection closed by 10.0.0.1 port 49234 Jan 20 02:38:36.762299 sshd-session[8722]: pam_unix(sshd:session): session closed for user core Jan 20 02:38:36.788000 audit[8722]: USER_END pid=8722 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:36.823261 systemd-logind[1619]: Session 86 logged out. Waiting for processes to exit. Jan 20 02:38:36.827608 kernel: audit: type=1106 audit(1768876716.788:1466): pid=8722 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:36.872054 kernel: audit: type=1104 audit(1768876716.788:1467): pid=8722 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:36.788000 audit[8722]: CRED_DISP pid=8722 uid=0 auid=500 ses=86 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:38:36.833402 systemd[1]: sshd@84-10.0.0.97:22-10.0.0.1:49234.service: Deactivated successfully. Jan 20 02:38:36.864036 systemd[1]: session-86.scope: Deactivated successfully. 
Jan 20 02:38:36.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@84-10.0.0.97:22-10.0.0.1:49234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:38:36.898476 systemd-logind[1619]: Removed session 86.
Jan 20 02:38:41.888665 systemd[1]: Started sshd@85-10.0.0.97:22-10.0.0.1:49236.service - OpenSSH per-connection server daemon (10.0.0.1:49236).
Jan 20 02:38:41.918652 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:38:41.922940 kernel: audit: type=1130 audit(1768876721.887:1469): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@85-10.0.0.97:22-10.0.0.1:49236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:38:41.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@85-10.0.0.97:22-10.0.0.1:49236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:38:42.470000 audit[8771]: USER_ACCT pid=8771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:42.507474 sshd[8771]: Accepted publickey for core from 10.0.0.1 port 49236 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:38:42.516997 kernel: audit: type=1101 audit(1768876722.470:1470): pid=8771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:42.518482 sshd-session[8771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:38:42.513000 audit[8771]: CRED_ACQ pid=8771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:42.556938 kernel: audit: type=1103 audit(1768876722.513:1471): pid=8771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:42.584611 kernel: audit: type=1006 audit(1768876722.513:1472): pid=8771 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=87 res=1
Jan 20 02:38:42.513000 audit[8771]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcab172740 a2=3 a3=0 items=0 ppid=1 pid=8771 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=87 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:38:42.683005 kernel: audit: type=1300 audit(1768876722.513:1472): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcab172740 a2=3 a3=0 items=0 ppid=1 pid=8771 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=87 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:38:42.670240 systemd-logind[1619]: New session 87 of user core.
Jan 20 02:38:42.513000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:38:42.707931 kernel: audit: type=1327 audit(1768876722.513:1472): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:38:42.711750 systemd[1]: Started session-87.scope - Session 87 of User core.
Jan 20 02:38:42.758000 audit[8771]: USER_START pid=8771 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:42.812758 kernel: audit: type=1105 audit(1768876722.758:1473): pid=8771 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:42.812998 kernel: audit: type=1103 audit(1768876722.773:1474): pid=8775 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:42.773000 audit[8775]: CRED_ACQ pid=8775 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:43.729145 sshd[8775]: Connection closed by 10.0.0.1 port 49236
Jan 20 02:38:43.743892 sshd-session[8771]: pam_unix(sshd:session): session closed for user core
Jan 20 02:38:43.757000 audit[8771]: USER_END pid=8771 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:43.866048 kernel: audit: type=1106 audit(1768876723.757:1475): pid=8771 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:43.757000 audit[8771]: CRED_DISP pid=8771 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:43.880170 systemd[1]: sshd@85-10.0.0.97:22-10.0.0.1:49236.service: Deactivated successfully.
Jan 20 02:38:43.934008 kernel: audit: type=1104 audit(1768876723.757:1476): pid=8771 uid=0 auid=500 ses=87 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:43.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@85-10.0.0.97:22-10.0.0.1:49236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:38:43.943342 systemd[1]: session-87.scope: Deactivated successfully.
Jan 20 02:38:43.977652 systemd-logind[1619]: Session 87 logged out. Waiting for processes to exit.
Jan 20 02:38:44.019076 systemd-logind[1619]: Removed session 87.
Jan 20 02:38:44.046789 kubelet[3053]: E0120 02:38:44.046744 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:38:46.111843 kubelet[3053]: E0120 02:38:46.111090 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:38:46.491639 kubelet[3053]: E0120 02:38:46.489394 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:38:47.527249 kubelet[3053]: E0120 02:38:47.524704 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df"
Jan 20 02:38:47.527249 kubelet[3053]: E0120 02:38:47.525199 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb"
Jan 20 02:38:48.544893 kubelet[3053]: E0120 02:38:48.544732 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1"
Jan 20 02:38:48.561266 kubelet[3053]: E0120 02:38:48.558346 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866"
Jan 20 02:38:48.821724 systemd[1]: Started sshd@86-10.0.0.97:22-10.0.0.1:36528.service - OpenSSH per-connection server daemon (10.0.0.1:36528).
Jan 20 02:38:48.843968 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:38:48.844152 kernel: audit: type=1130 audit(1768876728.834:1478): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@86-10.0.0.97:22-10.0.0.1:36528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:38:48.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@86-10.0.0.97:22-10.0.0.1:36528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:38:49.453000 audit[8788]: USER_ACCT pid=8788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:49.486268 sshd-session[8788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:38:49.536202 sshd[8788]: Accepted publickey for core from 10.0.0.1 port 36528 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:38:49.571241 kernel: audit: type=1101 audit(1768876729.453:1479): pid=8788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:49.481000 audit[8788]: CRED_ACQ pid=8788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:49.603237 kubelet[3053]: E0120 02:38:49.557508 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db"
Jan 20 02:38:49.656510 kubelet[3053]: E0120 02:38:49.656172 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509"
Jan 20 02:38:49.621594 systemd-logind[1619]: New session 88 of user core.
Jan 20 02:38:49.702292 kernel: audit: type=1103 audit(1768876729.481:1480): pid=8788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:49.702489 kernel: audit: type=1006 audit(1768876729.481:1481): pid=8788 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=88 res=1
Jan 20 02:38:49.702701 kernel: audit: type=1300 audit(1768876729.481:1481): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd892c5c80 a2=3 a3=0 items=0 ppid=1 pid=8788 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=88 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:38:49.481000 audit[8788]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd892c5c80 a2=3 a3=0 items=0 ppid=1 pid=8788 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=88 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:38:49.481000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:38:49.808471 kernel: audit: type=1327 audit(1768876729.481:1481): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:38:49.827867 systemd[1]: Started session-88.scope - Session 88 of User core.
Jan 20 02:38:49.894000 audit[8788]: USER_START pid=8788 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:49.930000 audit[8792]: CRED_ACQ pid=8792 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:50.098084 kernel: audit: type=1105 audit(1768876729.894:1482): pid=8788 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:50.098271 kernel: audit: type=1103 audit(1768876729.930:1483): pid=8792 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:50.775343 sshd[8792]: Connection closed by 10.0.0.1 port 36528
Jan 20 02:38:50.775228 sshd-session[8788]: pam_unix(sshd:session): session closed for user core
Jan 20 02:38:50.786000 audit[8788]: USER_END pid=8788 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:50.810201 systemd[1]: sshd@86-10.0.0.97:22-10.0.0.1:36528.service: Deactivated successfully.
Jan 20 02:38:50.823413 systemd[1]: session-88.scope: Deactivated successfully.
Jan 20 02:38:50.786000 audit[8788]: CRED_DISP pid=8788 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:50.835667 systemd-logind[1619]: Session 88 logged out. Waiting for processes to exit.
Jan 20 02:38:50.879652 systemd-logind[1619]: Removed session 88.
Jan 20 02:38:50.892710 kernel: audit: type=1106 audit(1768876730.786:1484): pid=8788 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:50.892946 kernel: audit: type=1104 audit(1768876730.786:1485): pid=8788 uid=0 auid=500 ses=88 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:50.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@86-10.0.0.97:22-10.0.0.1:36528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:38:54.573301 kubelet[3053]: E0120 02:38:54.572260 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:38:55.888270 systemd[1]: Started sshd@87-10.0.0.97:22-10.0.0.1:36498.service - OpenSSH per-connection server daemon (10.0.0.1:36498).
Jan 20 02:38:55.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@87-10.0.0.97:22-10.0.0.1:36498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:38:55.903967 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:38:55.904104 kernel: audit: type=1130 audit(1768876735.886:1487): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@87-10.0.0.97:22-10.0.0.1:36498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:38:56.731000 audit[8807]: USER_ACCT pid=8807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:56.757005 sshd[8807]: Accepted publickey for core from 10.0.0.1 port 36498 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:38:56.777698 sshd-session[8807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:38:56.761000 audit[8807]: CRED_ACQ pid=8807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:56.859260 kernel: audit: type=1101 audit(1768876736.731:1488): pid=8807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:56.859423 kernel: audit: type=1103 audit(1768876736.761:1489): pid=8807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:56.864267 systemd-logind[1619]: New session 89 of user core.
Jan 20 02:38:56.898625 kernel: audit: type=1006 audit(1768876736.761:1490): pid=8807 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=89 res=1
Jan 20 02:38:56.761000 audit[8807]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff630f1650 a2=3 a3=0 items=0 ppid=1 pid=8807 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=89 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:38:56.761000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:38:56.981676 kernel: audit: type=1300 audit(1768876736.761:1490): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff630f1650 a2=3 a3=0 items=0 ppid=1 pid=8807 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=89 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:38:56.981902 kernel: audit: type=1327 audit(1768876736.761:1490): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:38:56.999980 systemd[1]: Started session-89.scope - Session 89 of User core.
Jan 20 02:38:57.072000 audit[8807]: USER_START pid=8807 uid=0 auid=500 ses=89 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:57.094000 audit[8811]: CRED_ACQ pid=8811 uid=0 auid=500 ses=89 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:57.229000 kernel: audit: type=1105 audit(1768876737.072:1491): pid=8807 uid=0 auid=500 ses=89 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:57.229895 kernel: audit: type=1103 audit(1768876737.094:1492): pid=8811 uid=0 auid=500 ses=89 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:58.555851 kubelet[3053]: E0120 02:38:58.544742 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df"
Jan 20 02:38:58.639456 sshd[8811]: Connection closed by 10.0.0.1 port 36498
Jan 20 02:38:58.640607 sshd-session[8807]: pam_unix(sshd:session): session closed for user core
Jan 20 02:38:58.653000 audit[8807]: USER_END pid=8807 uid=0 auid=500 ses=89 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:58.726596 kernel: audit: type=1106 audit(1768876738.653:1493): pid=8807 uid=0 auid=500 ses=89 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:58.777908 kernel: audit: type=1104 audit(1768876738.653:1494): pid=8807 uid=0 auid=500 ses=89 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:58.653000 audit[8807]: CRED_DISP pid=8807 uid=0 auid=500 ses=89 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:38:58.740492 systemd[1]: sshd@87-10.0.0.97:22-10.0.0.1:36498.service: Deactivated successfully.
Jan 20 02:38:58.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@87-10.0.0.97:22-10.0.0.1:36498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:38:58.756571 systemd[1]: session-89.scope: Deactivated successfully.
Jan 20 02:38:58.796744 systemd-logind[1619]: Session 89 logged out. Waiting for processes to exit.
Jan 20 02:38:58.822397 systemd-logind[1619]: Removed session 89.
Jan 20 02:38:59.532903 kubelet[3053]: E0120 02:38:59.530902 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb"
Jan 20 02:39:01.528899 kubelet[3053]: E0120 02:39:01.523991 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1"
Jan 20 02:39:01.528899 kubelet[3053]: E0120 02:39:01.526915 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509"
Jan 20 02:39:03.544890 kubelet[3053]: E0120 02:39:03.537480 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db"
Jan 20 02:39:03.555621 kubelet[3053]: E0120 02:39:03.551211 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866"
Jan 20 02:39:03.785714 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:39:03.785925 kernel: audit: type=1130 audit(1768876743.767:1496): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@88-10.0.0.97:22-10.0.0.1:36512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:39:03.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@88-10.0.0.97:22-10.0.0.1:36512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:39:03.768493 systemd[1]: Started sshd@88-10.0.0.97:22-10.0.0.1:36512.service - OpenSSH per-connection server daemon (10.0.0.1:36512).
Jan 20 02:39:04.940000 audit[8839]: USER_ACCT pid=8839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:05.007439 sshd[8839]: Accepted publickey for core from 10.0.0.1 port 36512 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:39:05.016276 kernel: audit: type=1101 audit(1768876744.940:1497): pid=8839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:05.019750 sshd-session[8839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:39:04.974000 audit[8839]: CRED_ACQ pid=8839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:05.060614 kernel: audit: type=1103 audit(1768876744.974:1498): pid=8839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:05.085703 kernel: audit: type=1006 audit(1768876744.980:1499): pid=8839 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=90 res=1
Jan 20 02:39:04.980000 audit[8839]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe393d420 a2=3 a3=0 items=0 ppid=1 pid=8839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=90 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:39:05.119909 systemd-logind[1619]: New session 90 of user core.
Jan 20 02:39:05.152803 kernel: audit: type=1300 audit(1768876744.980:1499): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe393d420 a2=3 a3=0 items=0 ppid=1 pid=8839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=90 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:39:05.152888 kernel: audit: type=1327 audit(1768876744.980:1499): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:39:04.980000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:39:05.368920 systemd[1]: Started session-90.scope - Session 90 of User core.
Jan 20 02:39:05.469000 audit[8839]: USER_START pid=8839 uid=0 auid=500 ses=90 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:05.618589 kernel: audit: type=1105 audit(1768876745.469:1500): pid=8839 uid=0 auid=500 ses=90 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:05.533000 audit[8845]: CRED_ACQ pid=8845 uid=0 auid=500 ses=90 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:05.810595 kernel: audit: type=1103 audit(1768876745.533:1501): pid=8845 uid=0 auid=500 ses=90 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:07.626187 sshd[8845]: Connection closed by 10.0.0.1 port 36512 Jan 20 02:39:07.639000 audit[8839]: USER_END pid=8839 uid=0 auid=500 ses=90 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:07.624880 sshd-session[8839]: pam_unix(sshd:session): session closed for user core Jan 20 02:39:07.651513 systemd[1]: sshd@88-10.0.0.97:22-10.0.0.1:36512.service: Deactivated successfully. 
Jan 20 02:39:07.656355 systemd[1]: session-90.scope: Deactivated successfully. Jan 20 02:39:07.657052 systemd-logind[1619]: Session 90 logged out. Waiting for processes to exit. Jan 20 02:39:07.670642 systemd-logind[1619]: Removed session 90. Jan 20 02:39:07.732571 kernel: audit: type=1106 audit(1768876747.639:1502): pid=8839 uid=0 auid=500 ses=90 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:07.735113 kernel: audit: type=1104 audit(1768876747.639:1503): pid=8839 uid=0 auid=500 ses=90 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:07.639000 audit[8839]: CRED_DISP pid=8839 uid=0 auid=500 ses=90 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:07.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@88-10.0.0.97:22-10.0.0.1:36512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:39:07.868000 audit[8870]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=8870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:39:07.868000 audit[8870]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc6aa0a470 a2=0 a3=7ffc6aa0a45c items=0 ppid=3166 pid=8870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:39:07.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:39:07.951000 audit[8870]: NETFILTER_CFG table=nat:143 family=2 entries=104 op=nft_register_chain pid=8870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:39:07.951000 audit[8870]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc6aa0a470 a2=0 a3=7ffc6aa0a45c items=0 ppid=3166 pid=8870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:39:07.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:39:10.535432 kubelet[3053]: E0120 02:39:10.535094 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:39:12.576924 
kubelet[3053]: E0120 02:39:12.575610 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:39:12.687630 systemd[1]: Started sshd@89-10.0.0.97:22-10.0.0.1:57112.service - OpenSSH per-connection server daemon (10.0.0.1:57112). Jan 20 02:39:12.734216 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 20 02:39:12.734435 kernel: audit: type=1130 audit(1768876752.687:1507): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@89-10.0.0.97:22-10.0.0.1:57112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:39:12.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@89-10.0.0.97:22-10.0.0.1:57112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:39:13.525824 kubelet[3053]: E0120 02:39:13.524904 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:39:13.555000 audit[8900]: USER_ACCT pid=8900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:13.563184 sshd[8900]: Accepted publickey for core from 10.0.0.1 port 57112 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:39:13.573255 sshd-session[8900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:39:13.615511 kernel: audit: type=1101 audit(1768876753.555:1508): pid=8900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:13.562000 audit[8900]: CRED_ACQ pid=8900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:13.699023 systemd-logind[1619]: New session 91 of user core. 
Jan 20 02:39:13.749914 kernel: audit: type=1103 audit(1768876753.562:1509): pid=8900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:13.750117 kernel: audit: type=1006 audit(1768876753.562:1510): pid=8900 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=91 res=1 Jan 20 02:39:13.764170 kernel: audit: type=1300 audit(1768876753.562:1510): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1d036e50 a2=3 a3=0 items=0 ppid=1 pid=8900 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=91 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:39:13.562000 audit[8900]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1d036e50 a2=3 a3=0 items=0 ppid=1 pid=8900 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=91 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:39:13.562000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:39:13.807665 systemd[1]: Started session-91.scope - Session 91 of User core. 
Jan 20 02:39:13.835141 kernel: audit: type=1327 audit(1768876753.562:1510): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:39:13.833000 audit[8900]: USER_START pid=8900 uid=0 auid=500 ses=91 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:13.928687 kernel: audit: type=1105 audit(1768876753.833:1511): pid=8900 uid=0 auid=500 ses=91 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:13.928867 kernel: audit: type=1103 audit(1768876753.841:1512): pid=8904 uid=0 auid=500 ses=91 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:13.841000 audit[8904]: CRED_ACQ pid=8904 uid=0 auid=500 ses=91 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:14.530348 kubelet[3053]: E0120 02:39:14.530287 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" 
podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:39:15.242388 sshd[8904]: Connection closed by 10.0.0.1 port 57112 Jan 20 02:39:15.248114 sshd-session[8900]: pam_unix(sshd:session): session closed for user core Jan 20 02:39:15.271000 audit[8900]: USER_END pid=8900 uid=0 auid=500 ses=91 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:15.316737 systemd[1]: sshd@89-10.0.0.97:22-10.0.0.1:57112.service: Deactivated successfully. Jan 20 02:39:15.365511 systemd[1]: session-91.scope: Deactivated successfully. Jan 20 02:39:15.271000 audit[8900]: CRED_DISP pid=8900 uid=0 auid=500 ses=91 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:15.389376 systemd-logind[1619]: Session 91 logged out. Waiting for processes to exit. Jan 20 02:39:15.402469 systemd-logind[1619]: Removed session 91. 
Jan 20 02:39:15.418927 kernel: audit: type=1106 audit(1768876755.271:1513): pid=8900 uid=0 auid=500 ses=91 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:15.419037 kernel: audit: type=1104 audit(1768876755.271:1514): pid=8900 uid=0 auid=500 ses=91 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:15.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@89-10.0.0.97:22-10.0.0.1:57112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:39:16.558346 kubelet[3053]: E0120 02:39:16.544635 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:39:16.589448 kubelet[3053]: E0120 02:39:16.589378 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: 
not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:39:20.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@90-10.0.0.97:22-10.0.0.1:45710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:39:20.404185 systemd[1]: Started sshd@90-10.0.0.97:22-10.0.0.1:45710.service - OpenSSH per-connection server daemon (10.0.0.1:45710). Jan 20 02:39:20.500980 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:39:20.501172 kernel: audit: type=1130 audit(1768876760.402:1516): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@90-10.0.0.97:22-10.0.0.1:45710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:39:21.616000 audit[8918]: USER_ACCT pid=8918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:21.624802 sshd[8918]: Accepted publickey for core from 10.0.0.1 port 45710 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:39:21.681583 sshd-session[8918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:39:21.746032 kernel: audit: type=1101 audit(1768876761.616:1517): pid=8918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:21.746207 kernel: audit: type=1103 audit(1768876761.678:1518): pid=8918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:21.678000 audit[8918]: CRED_ACQ pid=8918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:21.759607 systemd-logind[1619]: New session 92 of user core. 
Jan 20 02:39:21.801855 kernel: audit: type=1006 audit(1768876761.678:1519): pid=8918 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=92 res=1 Jan 20 02:39:21.678000 audit[8918]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec38fe3c0 a2=3 a3=0 items=0 ppid=1 pid=8918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=92 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:39:21.894874 kernel: audit: type=1300 audit(1768876761.678:1519): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec38fe3c0 a2=3 a3=0 items=0 ppid=1 pid=8918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=92 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:39:21.895042 kernel: audit: type=1327 audit(1768876761.678:1519): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:39:21.678000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:39:21.935015 systemd[1]: Started session-92.scope - Session 92 of User core. 
Jan 20 02:39:22.029000 audit[8918]: USER_START pid=8918 uid=0 auid=500 ses=92 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:22.119631 kernel: audit: type=1105 audit(1768876762.029:1520): pid=8918 uid=0 auid=500 ses=92 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:22.073000 audit[8922]: CRED_ACQ pid=8922 uid=0 auid=500 ses=92 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:22.224194 kernel: audit: type=1103 audit(1768876762.073:1521): pid=8922 uid=0 auid=500 ses=92 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:22.583005 kubelet[3053]: E0120 02:39:22.582074 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:39:24.097881 sshd[8922]: Connection 
closed by 10.0.0.1 port 45710 Jan 20 02:39:24.096347 sshd-session[8918]: pam_unix(sshd:session): session closed for user core Jan 20 02:39:24.100000 audit[8918]: USER_END pid=8918 uid=0 auid=500 ses=92 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:24.127310 systemd[1]: sshd@90-10.0.0.97:22-10.0.0.1:45710.service: Deactivated successfully. Jan 20 02:39:24.134739 systemd[1]: session-92.scope: Deactivated successfully. Jan 20 02:39:24.148990 systemd-logind[1619]: Session 92 logged out. Waiting for processes to exit. Jan 20 02:39:24.161817 kernel: audit: type=1106 audit(1768876764.100:1522): pid=8918 uid=0 auid=500 ses=92 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:24.109000 audit[8918]: CRED_DISP pid=8918 uid=0 auid=500 ses=92 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:24.165617 systemd-logind[1619]: Removed session 92. 
Jan 20 02:39:24.209133 kernel: audit: type=1104 audit(1768876764.109:1523): pid=8918 uid=0 auid=500 ses=92 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:24.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@90-10.0.0.97:22-10.0.0.1:45710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:39:25.544458 kubelet[3053]: E0120 02:39:25.544402 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:39:25.546482 kubelet[3053]: E0120 02:39:25.545872 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:39:27.529893 kubelet[3053]: E0120 02:39:27.522501 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:39:27.529893 kubelet[3053]: E0120 02:39:27.525442 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:39:29.188776 systemd[1]: Started sshd@91-10.0.0.97:22-10.0.0.1:47978.service - OpenSSH per-connection server daemon (10.0.0.1:47978). Jan 20 02:39:29.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@91-10.0.0.97:22-10.0.0.1:47978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:39:29.263134 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:39:29.263326 kernel: audit: type=1130 audit(1768876769.187:1525): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@91-10.0.0.97:22-10.0.0.1:47978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:39:29.518457 kubelet[3053]: E0120 02:39:29.517470 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:39:29.927000 audit[8936]: USER_ACCT pid=8936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:29.957916 sshd[8936]: Accepted publickey for core from 10.0.0.1 port 47978 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:39:29.962650 sshd-session[8936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:39:29.992850 kernel: audit: type=1101 audit(1768876769.927:1526): pid=8936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:29.992992 kernel: audit: type=1103 audit(1768876769.936:1527): pid=8936 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:29.936000 audit[8936]: CRED_ACQ pid=8936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:30.031051 kernel: audit: type=1006 audit(1768876769.936:1528): pid=8936 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=93 res=1 Jan 20 02:39:29.936000 audit[8936]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc578bf360 a2=3 a3=0 items=0 ppid=1 pid=8936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=93 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:39:30.066228 systemd-logind[1619]: New session 93 of user core. Jan 20 02:39:30.114382 kernel: audit: type=1300 audit(1768876769.936:1528): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc578bf360 a2=3 a3=0 items=0 ppid=1 pid=8936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=93 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:39:29.936000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:39:30.133003 kernel: audit: type=1327 audit(1768876769.936:1528): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:39:30.143607 systemd[1]: Started session-93.scope - Session 93 of User core. 
Jan 20 02:39:30.172000 audit[8936]: USER_START pid=8936 uid=0 auid=500 ses=93 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:30.199642 kernel: audit: type=1105 audit(1768876770.172:1529): pid=8936 uid=0 auid=500 ses=93 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:30.198000 audit[8940]: CRED_ACQ pid=8940 uid=0 auid=500 ses=93 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:30.242847 kernel: audit: type=1103 audit(1768876770.198:1530): pid=8940 uid=0 auid=500 ses=93 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:31.514467 sshd[8940]: Connection closed by 10.0.0.1 port 47978 Jan 20 02:39:31.519101 sshd-session[8936]: pam_unix(sshd:session): session closed for user core Jan 20 02:39:31.527000 audit[8936]: USER_END pid=8936 uid=0 auid=500 ses=93 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:31.617126 kernel: audit: type=1106 audit(1768876771.527:1531): pid=8936 uid=0 auid=500 ses=93 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:31.617293 kernel: audit: type=1104 audit(1768876771.527:1532): pid=8936 uid=0 auid=500 ses=93 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:31.527000 audit[8936]: CRED_DISP pid=8936 uid=0 auid=500 ses=93 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:31.597000 systemd[1]: sshd@91-10.0.0.97:22-10.0.0.1:47978.service: Deactivated successfully. Jan 20 02:39:31.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@91-10.0.0.97:22-10.0.0.1:47978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:39:31.698020 systemd[1]: session-93.scope: Deactivated successfully. Jan 20 02:39:31.729293 systemd-logind[1619]: Session 93 logged out. Waiting for processes to exit. Jan 20 02:39:31.760476 systemd-logind[1619]: Removed session 93. 
Jan 20 02:39:32.562469 kubelet[3053]: E0120 02:39:32.561845 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:39:35.567999 kubelet[3053]: E0120 02:39:35.555726 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:39:36.519586 kubelet[3053]: E0120 02:39:36.519014 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:39:36.556838 kubelet[3053]: E0120 02:39:36.556693 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:39:36.721190 systemd[1]: Started sshd@92-10.0.0.97:22-10.0.0.1:48988.service - OpenSSH per-connection server daemon (10.0.0.1:48988). Jan 20 02:39:36.806674 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:39:36.831023 kernel: audit: type=1130 audit(1768876776.718:1534): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@92-10.0.0.97:22-10.0.0.1:48988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:39:36.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@92-10.0.0.97:22-10.0.0.1:48988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:39:37.371448 sshd[8955]: Accepted publickey for core from 10.0.0.1 port 48988 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:39:37.365000 audit[8955]: USER_ACCT pid=8955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:37.415507 sshd-session[8955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:39:37.443722 kernel: audit: type=1101 audit(1768876777.365:1535): pid=8955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:37.390000 audit[8955]: CRED_ACQ pid=8955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:37.506982 systemd-logind[1619]: New session 94 of user core. Jan 20 02:39:37.531864 kernel: audit: type=1103 audit(1768876777.390:1536): pid=8955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:37.557477 systemd[1]: Started session-94.scope - Session 94 of User core. Jan 20 02:39:37.638646 kernel: audit: type=1006 audit(1768876777.390:1537): pid=8955 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=94 res=1 Jan 20 02:39:37.778643 kernel: audit: type=1300 audit(1768876777.390:1537): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe17f60cd0 a2=3 a3=0 items=0 ppid=1 pid=8955 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=94 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:39:37.778733 kernel: audit: type=1327 audit(1768876777.390:1537): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:39:37.780879 kernel: audit: type=1105 audit(1768876777.635:1538): pid=8955 uid=0 auid=500 ses=94 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:37.390000 audit[8955]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe17f60cd0 a2=3 a3=0 items=0 ppid=1 pid=8955 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=94 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:39:37.390000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:39:37.635000 audit[8955]: USER_START pid=8955 uid=0 auid=500 ses=94 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:37.921658 kernel: audit: type=1103 audit(1768876777.740:1539): pid=8959 uid=0 auid=500 ses=94 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:37.740000 audit[8959]: CRED_ACQ pid=8959 uid=0 auid=500 ses=94 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:38.557977 kubelet[3053]: E0120 02:39:38.556135 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df" Jan 20 02:39:38.992669 sshd[8959]: Connection closed by 10.0.0.1 port 48988 Jan 20 02:39:38.993491 sshd-session[8955]: pam_unix(sshd:session): session closed for user core Jan 20 02:39:39.010000 audit[8955]: USER_END pid=8955 uid=0 auid=500 ses=94 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:39.079363 kernel: audit: type=1106 audit(1768876779.010:1540): pid=8955 uid=0 auid=500 ses=94 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:39.089369 systemd[1]: sshd@92-10.0.0.97:22-10.0.0.1:48988.service: Deactivated successfully. Jan 20 02:39:39.010000 audit[8955]: CRED_DISP pid=8955 uid=0 auid=500 ses=94 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:39.117359 systemd[1]: session-94.scope: Deactivated successfully. Jan 20 02:39:39.143666 kernel: audit: type=1104 audit(1768876779.010:1541): pid=8955 uid=0 auid=500 ses=94 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:39.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@92-10.0.0.97:22-10.0.0.1:48988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:39:39.190242 systemd-logind[1619]: Session 94 logged out. Waiting for processes to exit. Jan 20 02:39:39.231602 systemd-logind[1619]: Removed session 94. 
Jan 20 02:39:40.539977 kubelet[3053]: E0120 02:39:40.536386 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db" Jan 20 02:39:40.562279 kubelet[3053]: E0120 02:39:40.562122 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866" Jan 20 02:39:42.529589 kubelet[3053]: E0120 02:39:42.529427 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1" Jan 20 02:39:44.075832 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:39:44.075993 kernel: audit: type=1130 audit(1768876784.059:1543): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@93-10.0.0.97:22-10.0.0.1:48990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:39:44.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@93-10.0.0.97:22-10.0.0.1:48990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:39:44.060292 systemd[1]: Started sshd@93-10.0.0.97:22-10.0.0.1:48990.service - OpenSSH per-connection server daemon (10.0.0.1:48990). Jan 20 02:39:44.771000 audit[8999]: USER_ACCT pid=8999 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:44.826582 sshd[8999]: Accepted publickey for core from 10.0.0.1 port 48990 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:39:44.843838 kernel: audit: type=1101 audit(1768876784.771:1544): pid=8999 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:44.843982 kernel: audit: type=1103 audit(1768876784.810:1545): pid=8999 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:44.810000 audit[8999]: 
CRED_ACQ pid=8999 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:44.835578 sshd-session[8999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:39:44.810000 audit[8999]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff5589eb0 a2=3 a3=0 items=0 ppid=1 pid=8999 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=95 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:39:44.971412 systemd-logind[1619]: New session 95 of user core. Jan 20 02:39:45.013879 kernel: audit: type=1006 audit(1768876784.810:1546): pid=8999 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=95 res=1 Jan 20 02:39:45.014051 kernel: audit: type=1300 audit(1768876784.810:1546): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff5589eb0 a2=3 a3=0 items=0 ppid=1 pid=8999 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=95 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:39:45.014099 kernel: audit: type=1327 audit(1768876784.810:1546): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:39:44.810000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:39:45.037211 systemd[1]: Started session-95.scope - Session 95 of User core. 
Jan 20 02:39:45.114000 audit[8999]: USER_START pid=8999 uid=0 auid=500 ses=95 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:45.214920 kernel: audit: type=1105 audit(1768876785.114:1547): pid=8999 uid=0 auid=500 ses=95 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:45.188000 audit[9003]: CRED_ACQ pid=9003 uid=0 auid=500 ses=95 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:45.293377 kernel: audit: type=1103 audit(1768876785.188:1548): pid=9003 uid=0 auid=500 ses=95 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:46.315114 sshd[9003]: Connection closed by 10.0.0.1 port 48990 Jan 20 02:39:46.313836 sshd-session[8999]: pam_unix(sshd:session): session closed for user core Jan 20 02:39:46.327000 audit[8999]: USER_END pid=8999 uid=0 auid=500 ses=95 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:46.399026 kernel: audit: type=1106 audit(1768876786.327:1549): pid=8999 uid=0 auid=500 ses=95 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:46.399200 kernel: audit: type=1104 audit(1768876786.346:1550): pid=8999 uid=0 auid=500 ses=95 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:46.346000 audit[8999]: CRED_DISP pid=8999 uid=0 auid=500 ses=95 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:46.412244 systemd[1]: sshd@93-10.0.0.97:22-10.0.0.1:48990.service: Deactivated successfully. Jan 20 02:39:46.425866 systemd[1]: session-95.scope: Deactivated successfully. Jan 20 02:39:46.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@93-10.0.0.97:22-10.0.0.1:48990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:39:46.449070 systemd-logind[1619]: Session 95 logged out. Waiting for processes to exit. Jan 20 02:39:46.466809 systemd-logind[1619]: Removed session 95. 
Jan 20 02:39:46.576935 kubelet[3053]: E0120 02:39:46.574251 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:39:48.528598 kubelet[3053]: E0120 02:39:48.527042 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb" Jan 20 02:39:49.521065 kubelet[3053]: E0120 02:39:49.520390 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:39:50.522707 kubelet[3053]: E0120 02:39:50.519077 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-9lglv" podUID="797382c1-6a9f-48bd-be88-5e85feeef509" Jan 20 02:39:51.402463 systemd[1]: Started sshd@94-10.0.0.97:22-10.0.0.1:56874.service - OpenSSH per-connection server daemon (10.0.0.1:56874). Jan 20 02:39:51.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@94-10.0.0.97:22-10.0.0.1:56874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:39:51.416210 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:39:51.416342 kernel: audit: type=1130 audit(1768876791.401:1552): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@94-10.0.0.97:22-10.0.0.1:56874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:39:51.528954 kubelet[3053]: E0120 02:39:51.517449 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:39:51.802000 audit[9018]: USER_ACCT pid=9018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:51.819882 sshd[9018]: Accepted publickey for core from 10.0.0.1 port 56874 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4 Jan 20 02:39:51.823358 sshd-session[9018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:39:51.935881 kernel: audit: type=1101 audit(1768876791.802:1553): pid=9018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 20 02:39:51.936876 kernel: audit: type=1103 audit(1768876791.820:1554): pid=9018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:51.937268 kernel: audit: type=1006 audit(1768876791.820:1555): pid=9018 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=96 res=1 Jan 20 02:39:51.937414 kernel: audit: type=1300 audit(1768876791.820:1555): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc29bf6ca0 a2=3 a3=0 items=0 ppid=1 pid=9018 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=96 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:39:51.820000 audit[9018]: CRED_ACQ pid=9018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:51.820000 audit[9018]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc29bf6ca0 a2=3 a3=0 items=0 ppid=1 pid=9018 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=96 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:39:51.974463 kernel: audit: type=1327 audit(1768876791.820:1555): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:39:51.820000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:39:52.016772 systemd-logind[1619]: New session 96 of user core. Jan 20 02:39:52.041052 systemd[1]: Started session-96.scope - Session 96 of User core. 
Jan 20 02:39:52.054000 audit[9018]: USER_START pid=9018 uid=0 auid=500 ses=96 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:52.088630 kernel: audit: type=1105 audit(1768876792.054:1556): pid=9018 uid=0 auid=500 ses=96 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:52.088000 audit[9022]: CRED_ACQ pid=9022 uid=0 auid=500 ses=96 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:52.115778 kernel: audit: type=1103 audit(1768876792.088:1557): pid=9022 uid=0 auid=500 ses=96 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:52.941926 sshd[9022]: Connection closed by 10.0.0.1 port 56874 Jan 20 02:39:52.949889 sshd-session[9018]: pam_unix(sshd:session): session closed for user core Jan 20 02:39:52.962000 audit[9018]: USER_END pid=9018 uid=0 auid=500 ses=96 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:53.003324 systemd[1]: sshd@94-10.0.0.97:22-10.0.0.1:56874.service: Deactivated successfully. 
Jan 20 02:39:53.039862 systemd[1]: session-96.scope: Deactivated successfully. Jan 20 02:39:53.056202 kernel: audit: type=1106 audit(1768876792.962:1558): pid=9018 uid=0 auid=500 ses=96 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:52.962000 audit[9018]: CRED_DISP pid=9018 uid=0 auid=500 ses=96 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:53.063797 systemd-logind[1619]: Session 96 logged out. Waiting for processes to exit. Jan 20 02:39:53.099626 systemd-logind[1619]: Removed session 96. Jan 20 02:39:53.125897 kernel: audit: type=1104 audit(1768876792.962:1559): pid=9018 uid=0 auid=500 ses=96 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:39:53.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@94-10.0.0.97:22-10.0.0.1:56874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Jan 20 02:39:53.533859 kubelet[3053]: E0120 02:39:53.530828 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-v64bv" podUID="4d193768-31ad-4962-ae34-80e85c7499df"
Jan 20 02:39:53.543893 kubelet[3053]: E0120 02:39:53.537478 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-764db5c9d9-r829f" podUID="ca9f2980-346b-4927-8985-9cb6081e02db"
Jan 20 02:39:54.528031 kubelet[3053]: E0120 02:39:54.527477 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-749b857967-xt4pg" podUID="75bc6f23-38ce-4e96-aaf1-83d653850866"
Jan 20 02:39:56.522583 kubelet[3053]: E0120 02:39:56.522246 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tfwc7" podUID="4892884d-a213-4dd6-ab53-844c331ae6d1"
Jan 20 02:39:57.513878 kubelet[3053]: E0120 02:39:57.513495 3053 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:39:58.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@95-10.0.0.97:22-10.0.0.1:52556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:39:58.065137 systemd[1]: Started sshd@95-10.0.0.97:22-10.0.0.1:52556.service - OpenSSH per-connection server daemon (10.0.0.1:52556).
Jan 20 02:39:58.076417 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 02:39:58.076576 kernel: audit: type=1130 audit(1768876798.064:1561): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@95-10.0.0.97:22-10.0.0.1:52556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:39:58.649000 audit[9035]: USER_ACCT pid=9035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:58.660796 sshd[9035]: Accepted publickey for core from 10.0.0.1 port 52556 ssh2: RSA SHA256:sTlEJX1WBbtyXV4Mr40u3GfIbI2QMQzAxYQZtXp6mu4
Jan 20 02:39:58.667973 sshd-session[9035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:39:58.713201 kernel: audit: type=1101 audit(1768876798.649:1562): pid=9035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:58.651000 audit[9035]: CRED_ACQ pid=9035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:58.768894 systemd-logind[1619]: New session 97 of user core.
Jan 20 02:39:58.811205 kernel: audit: type=1103 audit(1768876798.651:1563): pid=9035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:58.811328 kernel: audit: type=1006 audit(1768876798.651:1564): pid=9035 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=97 res=1
Jan 20 02:39:58.811400 kernel: audit: type=1300 audit(1768876798.651:1564): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc12b66ec0 a2=3 a3=0 items=0 ppid=1 pid=9035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=97 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:39:58.651000 audit[9035]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc12b66ec0 a2=3 a3=0 items=0 ppid=1 pid=9035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=97 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:39:58.885955 kernel: audit: type=1327 audit(1768876798.651:1564): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:39:58.651000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:39:58.923701 systemd[1]: Started session-97.scope - Session 97 of User core.
Jan 20 02:39:58.946000 audit[9035]: USER_START pid=9035 uid=0 auid=500 ses=97 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:58.996778 kernel: audit: type=1105 audit(1768876798.946:1565): pid=9035 uid=0 auid=500 ses=97 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:58.961000 audit[9039]: CRED_ACQ pid=9039 uid=0 auid=500 ses=97 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:59.065692 kernel: audit: type=1103 audit(1768876798.961:1566): pid=9039 uid=0 auid=500 ses=97 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:59.527197 kubelet[3053]: E0120 02:39:59.527115 3053 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-746557d8fc-ztfh7" podUID="e572f9c2-ce5a-4d3c-956a-a140a15040fb"
Jan 20 02:39:59.857105 sshd[9039]: Connection closed by 10.0.0.1 port 52556
Jan 20 02:39:59.857338 sshd-session[9035]: pam_unix(sshd:session): session closed for user core
Jan 20 02:39:59.873000 audit[9035]: USER_END pid=9035 uid=0 auid=500 ses=97 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:59.898153 systemd[1]: sshd@95-10.0.0.97:22-10.0.0.1:52556.service: Deactivated successfully.
Jan 20 02:39:59.873000 audit[9035]: CRED_DISP pid=9035 uid=0 auid=500 ses=97 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:59.919740 systemd[1]: session-97.scope: Deactivated successfully.
Jan 20 02:39:59.933908 kernel: audit: type=1106 audit(1768876799.873:1567): pid=9035 uid=0 auid=500 ses=97 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:59.934066 kernel: audit: type=1104 audit(1768876799.873:1568): pid=9035 uid=0 auid=500 ses=97 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:39:59.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@95-10.0.0.97:22-10.0.0.1:52556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:39:59.943817 systemd-logind[1619]: Session 97 logged out. Waiting for processes to exit.
Jan 20 02:39:59.954185 systemd-logind[1619]: Removed session 97.